00:00:00.000 Started by upstream project "autotest-per-patch" build number 132045 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:12.717 The recommended git tool is: git 00:00:12.718 using credential 00000000-0000-0000-0000-000000000002 00:00:12.720 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:12.734 Fetching changes from the remote Git repository 00:00:12.737 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:12.749 Using shallow fetch with depth 1 00:00:12.749 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:12.749 > git --version # timeout=10 00:00:12.760 > git --version # 'git version 2.39.2' 00:00:12.760 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:12.774 Setting http proxy: proxy-dmz.intel.com:911 00:00:12.774 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:16.986 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:16.998 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:17.013 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD) 00:00:17.013 > git config core.sparsecheckout # timeout=10 00:00:17.025 > git read-tree -mu HEAD # timeout=10 00:00:17.042 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5 00:00:17.060 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job" 00:00:17.060 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10 00:00:17.144 [Pipeline] Start of Pipeline 00:00:17.155 [Pipeline] library 00:00:17.157 Loading library shm_lib@master 00:00:17.157 Library shm_lib@master is cached. Copying from home. 00:00:17.172 [Pipeline] node 00:00:17.181 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:17.182 [Pipeline] { 00:00:17.190 [Pipeline] catchError 00:00:17.192 [Pipeline] { 00:00:17.204 [Pipeline] wrap 00:00:17.211 [Pipeline] { 00:00:17.220 [Pipeline] stage 00:00:17.221 [Pipeline] { (Prologue) 00:00:17.234 [Pipeline] echo 00:00:17.235 Node: VM-host-SM17 00:00:17.240 [Pipeline] cleanWs 00:00:17.248 [WS-CLEANUP] Deleting project workspace... 00:00:17.248 [WS-CLEANUP] Deferred wipeout is used... 
00:00:17.253 [WS-CLEANUP] done 00:00:17.433 [Pipeline] setCustomBuildProperty 00:00:17.497 [Pipeline] httpRequest 00:00:17.896 [Pipeline] echo 00:00:17.899 Sorcerer 10.211.164.101 is alive 00:00:17.909 [Pipeline] retry 00:00:17.911 [Pipeline] { 00:00:17.926 [Pipeline] httpRequest 00:00:17.930 HttpMethod: GET 00:00:17.931 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:17.931 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:17.953 Response Code: HTTP/1.1 200 OK 00:00:17.953 Success: Status code 200 is in the accepted range: 200,404 00:00:17.954 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:35.316 [Pipeline] } 00:00:35.333 [Pipeline] // retry 00:00:35.341 [Pipeline] sh 00:00:35.625 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:35.640 [Pipeline] httpRequest 00:00:36.318 [Pipeline] echo 00:00:36.320 Sorcerer 10.211.164.101 is alive 00:00:36.330 [Pipeline] retry 00:00:36.332 [Pipeline] { 00:00:36.345 [Pipeline] httpRequest 00:00:36.350 HttpMethod: GET 00:00:36.351 URL: http://10.211.164.101/packages/spdk_16e58adb10c537c7227ae1815defb93b523e7b4a.tar.gz 00:00:36.351 Sending request to url: http://10.211.164.101/packages/spdk_16e58adb10c537c7227ae1815defb93b523e7b4a.tar.gz 00:00:36.356 Response Code: HTTP/1.1 200 OK 00:00:36.357 Success: Status code 200 is in the accepted range: 200,404 00:00:36.358 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_16e58adb10c537c7227ae1815defb93b523e7b4a.tar.gz 00:03:41.364 [Pipeline] } 00:03:41.384 [Pipeline] // retry 00:03:41.391 [Pipeline] sh 00:03:41.672 + tar --no-same-owner -xf spdk_16e58adb10c537c7227ae1815defb93b523e7b4a.tar.gz 00:03:44.972 [Pipeline] sh 00:03:45.253 + git -C spdk log --oneline -n5 00:03:45.253 16e58adb1 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header. 
00:03:45.253 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:03:45.253 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:03:45.253 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:03:45.253 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid() 00:03:45.271 [Pipeline] writeFile 00:03:45.287 [Pipeline] sh 00:03:45.569 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:45.583 [Pipeline] sh 00:03:45.910 + cat autorun-spdk.conf 00:03:45.911 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:45.911 SPDK_TEST_NVMF=1 00:03:45.911 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:45.911 SPDK_TEST_URING=1 00:03:45.911 SPDK_TEST_USDT=1 00:03:45.911 SPDK_RUN_UBSAN=1 00:03:45.911 NET_TYPE=virt 00:03:45.911 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:45.917 RUN_NIGHTLY=0 00:03:45.919 [Pipeline] } 00:03:45.933 [Pipeline] // stage 00:03:45.948 [Pipeline] stage 00:03:45.950 [Pipeline] { (Run VM) 00:03:45.963 [Pipeline] sh 00:03:46.243 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:46.243 + echo 'Start stage prepare_nvme.sh' 00:03:46.243 Start stage prepare_nvme.sh 00:03:46.243 + [[ -n 6 ]] 00:03:46.243 + disk_prefix=ex6 00:03:46.243 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:03:46.244 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:03:46.244 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:03:46.244 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:46.244 ++ SPDK_TEST_NVMF=1 00:03:46.244 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:46.244 ++ SPDK_TEST_URING=1 00:03:46.244 ++ SPDK_TEST_USDT=1 00:03:46.244 ++ SPDK_RUN_UBSAN=1 00:03:46.244 ++ NET_TYPE=virt 00:03:46.244 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:46.244 ++ RUN_NIGHTLY=0 00:03:46.244 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:46.244 + nvme_files=() 00:03:46.244 + declare -A nvme_files 00:03:46.244 + backend_dir=/var/lib/libvirt/images/backends 00:03:46.244 + nvme_files['nvme.img']=5G 00:03:46.244 + nvme_files['nvme-cmb.img']=5G 00:03:46.244 + nvme_files['nvme-multi0.img']=4G 00:03:46.244 + nvme_files['nvme-multi1.img']=4G 00:03:46.244 + nvme_files['nvme-multi2.img']=4G 00:03:46.244 + nvme_files['nvme-openstack.img']=8G 00:03:46.244 + nvme_files['nvme-zns.img']=5G 00:03:46.244 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:46.244 + (( SPDK_TEST_FTL == 1 )) 00:03:46.244 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:46.244 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:03:46.244 + for nvme in "${!nvme_files[@]}" 00:03:46.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:03:46.244 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:46.244 + for nvme in "${!nvme_files[@]}" 00:03:46.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:03:46.244 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:46.244 + for nvme in "${!nvme_files[@]}" 00:03:46.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:03:46.244 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:46.244 + for nvme in "${!nvme_files[@]}" 00:03:46.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:03:46.244 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:46.244 + for nvme in "${!nvme_files[@]}" 00:03:46.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:03:46.244 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:46.244 + for nvme in "${!nvme_files[@]}" 00:03:46.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:03:46.244 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:46.244 + for nvme in "${!nvme_files[@]}" 00:03:46.244 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:03:46.812 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:46.812 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:03:46.812 + echo 'End stage prepare_nvme.sh' 00:03:46.812 End stage prepare_nvme.sh 00:03:46.823 [Pipeline] sh 00:03:47.104 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:47.104 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:03:47.104 00:03:47.104 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:03:47.104 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:03:47.104 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:47.104 HELP=0 00:03:47.104 DRY_RUN=0 00:03:47.104 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:03:47.104 NVME_DISKS_TYPE=nvme,nvme, 00:03:47.104 NVME_AUTO_CREATE=0 00:03:47.104 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:03:47.104 NVME_CMB=,, 00:03:47.104 NVME_PMR=,, 00:03:47.104 NVME_ZNS=,, 00:03:47.104 NVME_MS=,, 00:03:47.104 NVME_FDP=,, 
00:03:47.104 SPDK_VAGRANT_DISTRO=fedora39 00:03:47.104 SPDK_VAGRANT_VMCPU=10 00:03:47.104 SPDK_VAGRANT_VMRAM=12288 00:03:47.104 SPDK_VAGRANT_PROVIDER=libvirt 00:03:47.104 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:47.104 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:47.104 SPDK_OPENSTACK_NETWORK=0 00:03:47.104 VAGRANT_PACKAGE_BOX=0 00:03:47.104 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:47.104 FORCE_DISTRO=true 00:03:47.104 VAGRANT_BOX_VERSION= 00:03:47.104 EXTRA_VAGRANTFILES= 00:03:47.104 NIC_MODEL=e1000 00:03:47.104 00:03:47.104 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:03:47.104 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:49.636 Bringing machine 'default' up with 'libvirt' provider... 00:03:50.205 ==> default: Creating image (snapshot of base box volume). 00:03:50.467 ==> default: Creating domain with the following settings... 00:03:50.467 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730739890_9e7b5b38211f9a7e20f7 00:03:50.467 ==> default: -- Domain type: kvm 00:03:50.467 ==> default: -- Cpus: 10 00:03:50.467 ==> default: -- Feature: acpi 00:03:50.467 ==> default: -- Feature: apic 00:03:50.467 ==> default: -- Feature: pae 00:03:50.467 ==> default: -- Memory: 12288M 00:03:50.467 ==> default: -- Memory Backing: hugepages: 00:03:50.467 ==> default: -- Management MAC: 00:03:50.467 ==> default: -- Loader: 00:03:50.467 ==> default: -- Nvram: 00:03:50.467 ==> default: -- Base box: spdk/fedora39 00:03:50.467 ==> default: -- Storage pool: default 00:03:50.467 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730739890_9e7b5b38211f9a7e20f7.img (20G) 00:03:50.468 ==> default: -- Volume Cache: default 00:03:50.468 ==> default: -- Kernel: 00:03:50.468 ==> default: -- Initrd: 00:03:50.468 ==> default: -- Graphics Type: vnc 00:03:50.468 ==> default: -- Graphics Port: -1 00:03:50.468 ==> default: -- Graphics IP: 127.0.0.1 00:03:50.468 ==> default: -- Graphics Password: Not defined 00:03:50.468 ==> default: -- Video Type: cirrus 00:03:50.468 ==> default: -- Video VRAM: 9216 00:03:50.468 ==> default: -- Sound Type: 00:03:50.468 ==> default: -- Keymap: en-us 00:03:50.468 ==> default: -- TPM Path: 00:03:50.468 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:50.468 ==> default: -- Command line args: 00:03:50.468 ==> default: -> value=-device, 00:03:50.468 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:50.468 ==> default: -> value=-drive, 00:03:50.468 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:03:50.468 ==> default: -> value=-device, 00:03:50.468 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:50.468 ==> default: -> value=-device, 00:03:50.468 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:50.468 ==> default: -> value=-drive, 00:03:50.468 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:50.468 ==> default: -> value=-device, 00:03:50.468 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:50.468 ==> default: -> value=-drive, 00:03:50.468 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:50.468 ==> default: -> value=-device, 00:03:50.468 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:50.468 ==> default: -> value=-drive, 00:03:50.468 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:50.468 ==> default: -> value=-device, 00:03:50.468 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:50.727 ==> default: Creating shared folders metadata... 00:03:50.727 ==> default: Starting domain. 00:03:52.109 ==> default: Waiting for domain to get an IP address... 00:04:10.214 ==> default: Waiting for SSH to become available... 00:04:10.214 ==> default: Configuring and enabling network interfaces... 00:04:12.750 default: SSH address: 192.168.121.142:22 00:04:12.750 default: SSH username: vagrant 00:04:12.750 default: SSH auth method: private key 00:04:14.654 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:22.862 ==> default: Mounting SSHFS shared folder... 00:04:23.842 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:23.842 ==> default: Checking Mount.. 00:04:25.222 ==> default: Folder Successfully Mounted! 00:04:25.222 ==> default: Running provisioner: file... 00:04:26.159 default: ~/.gitconfig => .gitconfig 00:04:26.418 00:04:26.418 SUCCESS! 00:04:26.418 00:04:26.418 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:04:26.418 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:26.418 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:04:26.418 00:04:26.427 [Pipeline] } 00:04:26.441 [Pipeline] // stage 00:04:26.451 [Pipeline] dir 00:04:26.452 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:04:26.454 [Pipeline] { 00:04:26.468 [Pipeline] catchError 00:04:26.470 [Pipeline] { 00:04:26.484 [Pipeline] sh 00:04:26.765 + vagrant ssh-config --host vagrant 00:04:26.765 + sed -ne /^Host/,$p 00:04:26.765 + tee ssh_conf 00:04:30.956 Host vagrant 00:04:30.956 HostName 192.168.121.142 00:04:30.956 User vagrant 00:04:30.956 Port 22 00:04:30.956 UserKnownHostsFile /dev/null 00:04:30.956 StrictHostKeyChecking no 00:04:30.956 PasswordAuthentication no 00:04:30.956 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:30.956 IdentitiesOnly yes 00:04:30.956 LogLevel FATAL 00:04:30.956 ForwardAgent yes 00:04:30.956 ForwardX11 yes 00:04:30.956 00:04:30.969 [Pipeline] withEnv 00:04:30.972 [Pipeline] { 00:04:30.985 [Pipeline] sh 00:04:31.267 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:31.267 source /etc/os-release 00:04:31.267 [[ -e /image.version ]] && img=$(< /image.version) 00:04:31.267 # Minimal, systemd-like check. 
00:04:31.267 if [[ -e /.dockerenv ]]; then 00:04:31.267 # Clear garbage from the node's name: 00:04:31.267 # agt-er_autotest_547-896 -> autotest_547-896 00:04:31.267 # $HOSTNAME is the actual container id 00:04:31.267 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:31.267 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:31.267 # We can assume this is a mount from a host where container is running, 00:04:31.267 # so fetch its hostname to easily identify the target swarm worker. 00:04:31.267 container="$(< /etc/hostname) ($agent)" 00:04:31.267 else 00:04:31.267 # Fallback 00:04:31.267 container=$agent 00:04:31.267 fi 00:04:31.267 fi 00:04:31.267 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:31.267 00:04:31.543 [Pipeline] } 00:04:31.559 [Pipeline] // withEnv 00:04:31.568 [Pipeline] setCustomBuildProperty 00:04:31.583 [Pipeline] stage 00:04:31.586 [Pipeline] { (Tests) 00:04:31.601 [Pipeline] sh 00:04:31.879 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:31.891 [Pipeline] sh 00:04:32.169 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:32.442 [Pipeline] timeout 00:04:32.442 Timeout set to expire in 1 hr 0 min 00:04:32.444 [Pipeline] { 00:04:32.458 [Pipeline] sh 00:04:32.737 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:33.305 HEAD is now at 16e58adb1 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header. 00:04:33.317 [Pipeline] sh 00:04:33.598 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:33.870 [Pipeline] sh 00:04:34.150 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:34.166 [Pipeline] sh 00:04:34.445 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:04:34.704 ++ readlink -f spdk_repo 00:04:34.704 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:34.704 + [[ -n /home/vagrant/spdk_repo ]] 00:04:34.704 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:34.704 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:34.704 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:34.704 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:34.704 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:34.704 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:04:34.704 + cd /home/vagrant/spdk_repo 00:04:34.704 + source /etc/os-release 00:04:34.704 ++ NAME='Fedora Linux' 00:04:34.704 ++ VERSION='39 (Cloud Edition)' 00:04:34.704 ++ ID=fedora 00:04:34.704 ++ VERSION_ID=39 00:04:34.704 ++ VERSION_CODENAME= 00:04:34.704 ++ PLATFORM_ID=platform:f39 00:04:34.704 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:34.704 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:34.704 ++ LOGO=fedora-logo-icon 00:04:34.704 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:34.704 ++ HOME_URL=https://fedoraproject.org/ 00:04:34.704 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:34.704 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:34.704 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:34.704 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:34.704 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:34.704 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:34.704 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:34.704 ++ SUPPORT_END=2024-11-12 00:04:34.704 ++ VARIANT='Cloud Edition' 00:04:34.704 ++ VARIANT_ID=cloud 00:04:34.704 + uname -a 00:04:34.704 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:34.704 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:34.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.963 Hugepages 00:04:34.963 node hugesize free / total 00:04:34.963 node0 1048576kB 0 / 0 00:04:34.963 node0 2048kB 0 / 0 00:04:34.963 00:04:34.963 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.222 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:35.222 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:35.222 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:35.222 + rm -f /tmp/spdk-ld-path 00:04:35.222 + source autorun-spdk.conf 00:04:35.222 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:35.222 ++ SPDK_TEST_NVMF=1 00:04:35.222 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:35.222 ++ SPDK_TEST_URING=1 00:04:35.222 ++ SPDK_TEST_USDT=1 00:04:35.222 ++ SPDK_RUN_UBSAN=1 00:04:35.222 ++ NET_TYPE=virt 00:04:35.222 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:35.222 ++ RUN_NIGHTLY=0 00:04:35.222 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:35.222 + [[ -n '' ]] 00:04:35.222 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:35.222 + for M in /var/spdk/build-*-manifest.txt 00:04:35.222 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:35.222 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:35.222 + for M in /var/spdk/build-*-manifest.txt 00:04:35.222 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:35.222 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:35.222 + for M in /var/spdk/build-*-manifest.txt 00:04:35.222 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:35.222 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:35.222 ++ uname 00:04:35.222 + [[ Linux == \L\i\n\u\x ]] 00:04:35.222 + sudo dmesg -T 00:04:35.222 + sudo dmesg --clear 00:04:35.222 + dmesg_pid=5215 00:04:35.222 + [[ Fedora Linux == FreeBSD ]] 00:04:35.222 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:35.222 + sudo dmesg -Tw 00:04:35.222 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:35.222 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:35.222 + [[ -x /usr/src/fio-static/fio ]] 00:04:35.222 + export FIO_BIN=/usr/src/fio-static/fio 00:04:35.222 + FIO_BIN=/usr/src/fio-static/fio 00:04:35.222 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:35.222 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:35.222 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:35.222 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:35.222 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:35.222 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:35.222 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:35.222 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:35.222 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.222 17:05:36 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:04:35.222 17:05:36 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:35.222 17:05:36 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:04:35.222 17:05:36 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:35.222 17:05:36 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:35.482 17:05:36 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:04:35.482 17:05:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.482 17:05:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:35.482 17:05:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:35.482 17:05:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.482 17:05:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.483 17:05:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.483 17:05:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.483 17:05:36 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.483 17:05:36 -- paths/export.sh@5 -- $ export PATH 00:04:35.483 17:05:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.483 17:05:36 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:35.483 17:05:36 -- common/autobuild_common.sh@486 -- $ date +%s 00:04:35.483 17:05:36 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730739936.XXXXXX 00:04:35.483 17:05:36 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730739936.kYH1oz 00:04:35.483 17:05:36 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:04:35.483 17:05:36 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:04:35.483 17:05:36 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:35.483 17:05:36 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:35.483 17:05:36 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:35.483 17:05:36 -- common/autobuild_common.sh@502 -- $ get_config_params 00:04:35.483 17:05:36 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:04:35.483 17:05:36 -- common/autotest_common.sh@10 -- $ set +x 00:04:35.483 17:05:36 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:04:35.483 17:05:36 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:04:35.483 17:05:36 -- pm/common@17 -- $ local monitor 00:04:35.483 17:05:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.483 17:05:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.483 17:05:36 -- pm/common@25 -- $ sleep 1 00:04:35.483 17:05:36 -- pm/common@21 -- $ date +%s 00:04:35.483 17:05:36 -- pm/common@21 -- $ date +%s 00:04:35.483 17:05:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730739936 00:04:35.483 17:05:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730739936 00:04:35.483 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730739936_collect-cpu-load.pm.log 00:04:35.483 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730739936_collect-vmstat.pm.log 00:04:36.423 17:05:37 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:04:36.423 17:05:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:36.423 17:05:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:36.423 17:05:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:36.423 17:05:37 -- spdk/autobuild.sh@16 -- $ date -u 00:04:36.423 Mon Nov 4 05:05:37 PM UTC 2024 00:04:36.423 17:05:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:36.423 v25.01-pre-159-g16e58adb1 00:04:36.423 17:05:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:36.423 17:05:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:36.423 17:05:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:36.423 17:05:37 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:04:36.423 17:05:37 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:04:36.423 17:05:37 -- common/autotest_common.sh@10 -- $ set +x 00:04:36.423 ************************************ 00:04:36.423 START TEST ubsan 00:04:36.423 ************************************ 00:04:36.423 using ubsan 00:04:36.423 17:05:37 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:04:36.423 00:04:36.423 real 0m0.000s 00:04:36.423 user 0m0.000s 00:04:36.423 sys 0m0.000s 00:04:36.423 17:05:37 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:36.423 17:05:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:36.423 ************************************ 00:04:36.423 END TEST ubsan 00:04:36.423 ************************************ 00:04:36.423 17:05:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:36.423 17:05:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:36.423 17:05:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:36.423 17:05:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:36.423 17:05:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:36.423 17:05:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:36.423 17:05:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:36.423 17:05:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:36.423 17:05:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:04:36.681 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:36.681 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:36.939 Using 'verbs' RDMA provider 00:04:52.750 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:05:04.996 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:05:04.996 Creating mk/config.mk...done. 00:05:04.996 Creating mk/cc.flags.mk...done. 00:05:04.996 Type 'make' to build. 
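The configure invocation above and the "make -j10" that follows can be reproduced outside the CI harness. The sketch below is a minimal summary, not part of the Jenkins output: it reuses the configure flags recorded in this log, checks out the commit under test (16e58adb1), and assumes a Fedora 39 host, the upstream clone URL, and the standard SPDK setup steps (submodule init, scripts/pkgdep.sh) that are not shown in this excerpt.

# Minimal reproduction sketch (assumptions noted above); adjust paths and -j for your host.
git clone https://github.com/spdk/spdk ~/spdk_repo/spdk        # assumed upstream URL
cd ~/spdk_repo/spdk
git checkout 16e58adb1                                          # commit under test in this run
git submodule update --init                                     # pull DPDK and other submodules (assumed step)
sudo ./scripts/pkgdep.sh                                        # install build dependencies (assumed step)
./configure --enable-debug --enable-werror --with-rdma --with-usdt \
        --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-uring --with-shared                  # flags copied from the log above
make -j10                                                        # same parallelism as this job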
00:05:04.996 17:06:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:05:04.996 17:06:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:05:04.996 17:06:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:04.996 17:06:04 -- common/autotest_common.sh@10 -- $ set +x 00:05:04.996 ************************************ 00:05:04.996 START TEST make 00:05:04.996 ************************************ 00:05:04.996 17:06:04 make -- common/autotest_common.sh@1127 -- $ make -j10 00:05:04.996 make[1]: Nothing to be done for 'all'. 00:05:17.205 The Meson build system 00:05:17.205 Version: 1.5.0 00:05:17.205 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:17.205 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:17.205 Build type: native build 00:05:17.205 Program cat found: YES (/usr/bin/cat) 00:05:17.205 Project name: DPDK 00:05:17.205 Project version: 24.03.0 00:05:17.205 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:17.205 C linker for the host machine: cc ld.bfd 2.40-14 00:05:17.205 Host machine cpu family: x86_64 00:05:17.205 Host machine cpu: x86_64 00:05:17.205 Message: ## Building in Developer Mode ## 00:05:17.205 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:17.205 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:17.205 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:17.205 Program python3 found: YES (/usr/bin/python3) 00:05:17.206 Program cat found: YES (/usr/bin/cat) 00:05:17.206 Compiler for C supports arguments -march=native: YES 00:05:17.206 Checking for size of "void *" : 8 00:05:17.206 Checking for size of "void *" : 8 (cached) 00:05:17.206 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:17.206 Library m found: YES 00:05:17.206 Library numa found: YES 00:05:17.206 Has header "numaif.h" : YES 00:05:17.206 Library fdt found: NO 00:05:17.206 Library execinfo found: NO 00:05:17.206 Has header "execinfo.h" : YES 00:05:17.206 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:17.206 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:17.206 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:17.206 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:17.206 Run-time dependency openssl found: YES 3.1.1 00:05:17.206 Run-time dependency libpcap found: YES 1.10.4 00:05:17.206 Has header "pcap.h" with dependency libpcap: YES 00:05:17.206 Compiler for C supports arguments -Wcast-qual: YES 00:05:17.206 Compiler for C supports arguments -Wdeprecated: YES 00:05:17.206 Compiler for C supports arguments -Wformat: YES 00:05:17.206 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:17.206 Compiler for C supports arguments -Wformat-security: NO 00:05:17.206 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:17.206 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:17.206 Compiler for C supports arguments -Wnested-externs: YES 00:05:17.206 Compiler for C supports arguments -Wold-style-definition: YES 00:05:17.206 Compiler for C supports arguments -Wpointer-arith: YES 00:05:17.206 Compiler for C supports arguments -Wsign-compare: YES 00:05:17.206 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:17.206 Compiler for C supports arguments -Wundef: YES 00:05:17.206 Compiler for C supports arguments -Wwrite-strings: YES 00:05:17.206 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:05:17.206 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:17.206 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:17.206 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:17.206 Program objdump found: YES (/usr/bin/objdump) 00:05:17.206 Compiler for C supports arguments -mavx512f: YES 00:05:17.206 Checking if "AVX512 checking" compiles: YES 00:05:17.206 Fetching value of define "__SSE4_2__" : 1 00:05:17.206 Fetching value of define "__AES__" : 1 00:05:17.206 Fetching value of define "__AVX__" : 1 00:05:17.206 Fetching value of define "__AVX2__" : 1 00:05:17.206 Fetching value of define "__AVX512BW__" : (undefined) 00:05:17.206 Fetching value of define "__AVX512CD__" : (undefined) 00:05:17.206 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:17.206 Fetching value of define "__AVX512F__" : (undefined) 00:05:17.206 Fetching value of define "__AVX512VL__" : (undefined) 00:05:17.206 Fetching value of define "__PCLMUL__" : 1 00:05:17.206 Fetching value of define "__RDRND__" : 1 00:05:17.206 Fetching value of define "__RDSEED__" : 1 00:05:17.206 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:17.206 Fetching value of define "__znver1__" : (undefined) 00:05:17.206 Fetching value of define "__znver2__" : (undefined) 00:05:17.206 Fetching value of define "__znver3__" : (undefined) 00:05:17.206 Fetching value of define "__znver4__" : (undefined) 00:05:17.206 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:17.206 Message: lib/log: Defining dependency "log" 00:05:17.206 Message: lib/kvargs: Defining dependency "kvargs" 00:05:17.206 Message: lib/telemetry: Defining dependency "telemetry" 00:05:17.206 Checking for function "getentropy" : NO 00:05:17.206 Message: lib/eal: Defining dependency "eal" 00:05:17.206 Message: lib/ring: Defining dependency "ring" 00:05:17.206 Message: lib/rcu: Defining dependency "rcu" 00:05:17.206 Message: lib/mempool: Defining dependency "mempool" 00:05:17.206 Message: lib/mbuf: Defining dependency "mbuf" 00:05:17.206 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:17.206 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:17.206 Compiler for C supports arguments -mpclmul: YES 00:05:17.206 Compiler for C supports arguments -maes: YES 00:05:17.206 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:17.206 Compiler for C supports arguments -mavx512bw: YES 00:05:17.206 Compiler for C supports arguments -mavx512dq: YES 00:05:17.206 Compiler for C supports arguments -mavx512vl: YES 00:05:17.206 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:17.206 Compiler for C supports arguments -mavx2: YES 00:05:17.206 Compiler for C supports arguments -mavx: YES 00:05:17.206 Message: lib/net: Defining dependency "net" 00:05:17.206 Message: lib/meter: Defining dependency "meter" 00:05:17.206 Message: lib/ethdev: Defining dependency "ethdev" 00:05:17.206 Message: lib/pci: Defining dependency "pci" 00:05:17.206 Message: lib/cmdline: Defining dependency "cmdline" 00:05:17.206 Message: lib/hash: Defining dependency "hash" 00:05:17.206 Message: lib/timer: Defining dependency "timer" 00:05:17.206 Message: lib/compressdev: Defining dependency "compressdev" 00:05:17.206 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:17.206 Message: lib/dmadev: Defining dependency "dmadev" 00:05:17.206 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:17.206 Message: lib/power: Defining 
dependency "power" 00:05:17.206 Message: lib/reorder: Defining dependency "reorder" 00:05:17.206 Message: lib/security: Defining dependency "security" 00:05:17.206 Has header "linux/userfaultfd.h" : YES 00:05:17.206 Has header "linux/vduse.h" : YES 00:05:17.206 Message: lib/vhost: Defining dependency "vhost" 00:05:17.206 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:17.206 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:17.206 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:17.206 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:17.206 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:17.206 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:17.206 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:17.206 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:17.206 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:17.206 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:17.206 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:17.206 Configuring doxy-api-html.conf using configuration 00:05:17.206 Configuring doxy-api-man.conf using configuration 00:05:17.206 Program mandb found: YES (/usr/bin/mandb) 00:05:17.206 Program sphinx-build found: NO 00:05:17.206 Configuring rte_build_config.h using configuration 00:05:17.206 Message: 00:05:17.206 ================= 00:05:17.206 Applications Enabled 00:05:17.206 ================= 00:05:17.206 00:05:17.206 apps: 00:05:17.206 00:05:17.206 00:05:17.206 Message: 00:05:17.206 ================= 00:05:17.206 Libraries Enabled 00:05:17.206 ================= 00:05:17.206 00:05:17.206 libs: 00:05:17.206 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:17.206 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:17.206 cryptodev, dmadev, power, reorder, security, vhost, 00:05:17.206 00:05:17.206 Message: 00:05:17.206 =============== 00:05:17.206 Drivers Enabled 00:05:17.206 =============== 00:05:17.206 00:05:17.206 common: 00:05:17.206 00:05:17.206 bus: 00:05:17.206 pci, vdev, 00:05:17.206 mempool: 00:05:17.206 ring, 00:05:17.206 dma: 00:05:17.206 00:05:17.206 net: 00:05:17.206 00:05:17.206 crypto: 00:05:17.206 00:05:17.206 compress: 00:05:17.206 00:05:17.206 vdpa: 00:05:17.206 00:05:17.206 00:05:17.206 Message: 00:05:17.206 ================= 00:05:17.206 Content Skipped 00:05:17.206 ================= 00:05:17.206 00:05:17.206 apps: 00:05:17.206 dumpcap: explicitly disabled via build config 00:05:17.206 graph: explicitly disabled via build config 00:05:17.206 pdump: explicitly disabled via build config 00:05:17.206 proc-info: explicitly disabled via build config 00:05:17.206 test-acl: explicitly disabled via build config 00:05:17.206 test-bbdev: explicitly disabled via build config 00:05:17.206 test-cmdline: explicitly disabled via build config 00:05:17.206 test-compress-perf: explicitly disabled via build config 00:05:17.206 test-crypto-perf: explicitly disabled via build config 00:05:17.206 test-dma-perf: explicitly disabled via build config 00:05:17.206 test-eventdev: explicitly disabled via build config 00:05:17.206 test-fib: explicitly disabled via build config 00:05:17.206 test-flow-perf: explicitly disabled via build config 00:05:17.206 test-gpudev: explicitly disabled via build config 00:05:17.206 test-mldev: explicitly disabled via build config 00:05:17.206 test-pipeline: 
explicitly disabled via build config 00:05:17.206 test-pmd: explicitly disabled via build config 00:05:17.206 test-regex: explicitly disabled via build config 00:05:17.206 test-sad: explicitly disabled via build config 00:05:17.206 test-security-perf: explicitly disabled via build config 00:05:17.206 00:05:17.206 libs: 00:05:17.206 argparse: explicitly disabled via build config 00:05:17.206 metrics: explicitly disabled via build config 00:05:17.206 acl: explicitly disabled via build config 00:05:17.206 bbdev: explicitly disabled via build config 00:05:17.206 bitratestats: explicitly disabled via build config 00:05:17.206 bpf: explicitly disabled via build config 00:05:17.206 cfgfile: explicitly disabled via build config 00:05:17.206 distributor: explicitly disabled via build config 00:05:17.206 efd: explicitly disabled via build config 00:05:17.206 eventdev: explicitly disabled via build config 00:05:17.206 dispatcher: explicitly disabled via build config 00:05:17.206 gpudev: explicitly disabled via build config 00:05:17.206 gro: explicitly disabled via build config 00:05:17.207 gso: explicitly disabled via build config 00:05:17.207 ip_frag: explicitly disabled via build config 00:05:17.207 jobstats: explicitly disabled via build config 00:05:17.207 latencystats: explicitly disabled via build config 00:05:17.207 lpm: explicitly disabled via build config 00:05:17.207 member: explicitly disabled via build config 00:05:17.207 pcapng: explicitly disabled via build config 00:05:17.207 rawdev: explicitly disabled via build config 00:05:17.207 regexdev: explicitly disabled via build config 00:05:17.207 mldev: explicitly disabled via build config 00:05:17.207 rib: explicitly disabled via build config 00:05:17.207 sched: explicitly disabled via build config 00:05:17.207 stack: explicitly disabled via build config 00:05:17.207 ipsec: explicitly disabled via build config 00:05:17.207 pdcp: explicitly disabled via build config 00:05:17.207 fib: explicitly disabled via build config 00:05:17.207 port: explicitly disabled via build config 00:05:17.207 pdump: explicitly disabled via build config 00:05:17.207 table: explicitly disabled via build config 00:05:17.207 pipeline: explicitly disabled via build config 00:05:17.207 graph: explicitly disabled via build config 00:05:17.207 node: explicitly disabled via build config 00:05:17.207 00:05:17.207 drivers: 00:05:17.207 common/cpt: not in enabled drivers build config 00:05:17.207 common/dpaax: not in enabled drivers build config 00:05:17.207 common/iavf: not in enabled drivers build config 00:05:17.207 common/idpf: not in enabled drivers build config 00:05:17.207 common/ionic: not in enabled drivers build config 00:05:17.207 common/mvep: not in enabled drivers build config 00:05:17.207 common/octeontx: not in enabled drivers build config 00:05:17.207 bus/auxiliary: not in enabled drivers build config 00:05:17.207 bus/cdx: not in enabled drivers build config 00:05:17.207 bus/dpaa: not in enabled drivers build config 00:05:17.207 bus/fslmc: not in enabled drivers build config 00:05:17.207 bus/ifpga: not in enabled drivers build config 00:05:17.207 bus/platform: not in enabled drivers build config 00:05:17.207 bus/uacce: not in enabled drivers build config 00:05:17.207 bus/vmbus: not in enabled drivers build config 00:05:17.207 common/cnxk: not in enabled drivers build config 00:05:17.207 common/mlx5: not in enabled drivers build config 00:05:17.207 common/nfp: not in enabled drivers build config 00:05:17.207 common/nitrox: not in enabled drivers build config 
00:05:17.207 common/qat: not in enabled drivers build config 00:05:17.207 common/sfc_efx: not in enabled drivers build config 00:05:17.207 mempool/bucket: not in enabled drivers build config 00:05:17.207 mempool/cnxk: not in enabled drivers build config 00:05:17.207 mempool/dpaa: not in enabled drivers build config 00:05:17.207 mempool/dpaa2: not in enabled drivers build config 00:05:17.207 mempool/octeontx: not in enabled drivers build config 00:05:17.207 mempool/stack: not in enabled drivers build config 00:05:17.207 dma/cnxk: not in enabled drivers build config 00:05:17.207 dma/dpaa: not in enabled drivers build config 00:05:17.207 dma/dpaa2: not in enabled drivers build config 00:05:17.207 dma/hisilicon: not in enabled drivers build config 00:05:17.207 dma/idxd: not in enabled drivers build config 00:05:17.207 dma/ioat: not in enabled drivers build config 00:05:17.207 dma/skeleton: not in enabled drivers build config 00:05:17.207 net/af_packet: not in enabled drivers build config 00:05:17.207 net/af_xdp: not in enabled drivers build config 00:05:17.207 net/ark: not in enabled drivers build config 00:05:17.207 net/atlantic: not in enabled drivers build config 00:05:17.207 net/avp: not in enabled drivers build config 00:05:17.207 net/axgbe: not in enabled drivers build config 00:05:17.207 net/bnx2x: not in enabled drivers build config 00:05:17.207 net/bnxt: not in enabled drivers build config 00:05:17.207 net/bonding: not in enabled drivers build config 00:05:17.207 net/cnxk: not in enabled drivers build config 00:05:17.207 net/cpfl: not in enabled drivers build config 00:05:17.207 net/cxgbe: not in enabled drivers build config 00:05:17.207 net/dpaa: not in enabled drivers build config 00:05:17.207 net/dpaa2: not in enabled drivers build config 00:05:17.207 net/e1000: not in enabled drivers build config 00:05:17.207 net/ena: not in enabled drivers build config 00:05:17.207 net/enetc: not in enabled drivers build config 00:05:17.207 net/enetfec: not in enabled drivers build config 00:05:17.207 net/enic: not in enabled drivers build config 00:05:17.207 net/failsafe: not in enabled drivers build config 00:05:17.207 net/fm10k: not in enabled drivers build config 00:05:17.207 net/gve: not in enabled drivers build config 00:05:17.207 net/hinic: not in enabled drivers build config 00:05:17.207 net/hns3: not in enabled drivers build config 00:05:17.207 net/i40e: not in enabled drivers build config 00:05:17.207 net/iavf: not in enabled drivers build config 00:05:17.207 net/ice: not in enabled drivers build config 00:05:17.207 net/idpf: not in enabled drivers build config 00:05:17.207 net/igc: not in enabled drivers build config 00:05:17.207 net/ionic: not in enabled drivers build config 00:05:17.207 net/ipn3ke: not in enabled drivers build config 00:05:17.207 net/ixgbe: not in enabled drivers build config 00:05:17.207 net/mana: not in enabled drivers build config 00:05:17.207 net/memif: not in enabled drivers build config 00:05:17.207 net/mlx4: not in enabled drivers build config 00:05:17.207 net/mlx5: not in enabled drivers build config 00:05:17.207 net/mvneta: not in enabled drivers build config 00:05:17.207 net/mvpp2: not in enabled drivers build config 00:05:17.207 net/netvsc: not in enabled drivers build config 00:05:17.207 net/nfb: not in enabled drivers build config 00:05:17.207 net/nfp: not in enabled drivers build config 00:05:17.207 net/ngbe: not in enabled drivers build config 00:05:17.207 net/null: not in enabled drivers build config 00:05:17.207 net/octeontx: not in enabled drivers 
build config 00:05:17.207 net/octeon_ep: not in enabled drivers build config 00:05:17.207 net/pcap: not in enabled drivers build config 00:05:17.207 net/pfe: not in enabled drivers build config 00:05:17.207 net/qede: not in enabled drivers build config 00:05:17.207 net/ring: not in enabled drivers build config 00:05:17.207 net/sfc: not in enabled drivers build config 00:05:17.207 net/softnic: not in enabled drivers build config 00:05:17.207 net/tap: not in enabled drivers build config 00:05:17.207 net/thunderx: not in enabled drivers build config 00:05:17.207 net/txgbe: not in enabled drivers build config 00:05:17.207 net/vdev_netvsc: not in enabled drivers build config 00:05:17.207 net/vhost: not in enabled drivers build config 00:05:17.207 net/virtio: not in enabled drivers build config 00:05:17.207 net/vmxnet3: not in enabled drivers build config 00:05:17.207 raw/*: missing internal dependency, "rawdev" 00:05:17.207 crypto/armv8: not in enabled drivers build config 00:05:17.207 crypto/bcmfs: not in enabled drivers build config 00:05:17.207 crypto/caam_jr: not in enabled drivers build config 00:05:17.207 crypto/ccp: not in enabled drivers build config 00:05:17.207 crypto/cnxk: not in enabled drivers build config 00:05:17.207 crypto/dpaa_sec: not in enabled drivers build config 00:05:17.207 crypto/dpaa2_sec: not in enabled drivers build config 00:05:17.207 crypto/ipsec_mb: not in enabled drivers build config 00:05:17.207 crypto/mlx5: not in enabled drivers build config 00:05:17.207 crypto/mvsam: not in enabled drivers build config 00:05:17.207 crypto/nitrox: not in enabled drivers build config 00:05:17.207 crypto/null: not in enabled drivers build config 00:05:17.207 crypto/octeontx: not in enabled drivers build config 00:05:17.207 crypto/openssl: not in enabled drivers build config 00:05:17.207 crypto/scheduler: not in enabled drivers build config 00:05:17.207 crypto/uadk: not in enabled drivers build config 00:05:17.207 crypto/virtio: not in enabled drivers build config 00:05:17.207 compress/isal: not in enabled drivers build config 00:05:17.207 compress/mlx5: not in enabled drivers build config 00:05:17.207 compress/nitrox: not in enabled drivers build config 00:05:17.207 compress/octeontx: not in enabled drivers build config 00:05:17.207 compress/zlib: not in enabled drivers build config 00:05:17.207 regex/*: missing internal dependency, "regexdev" 00:05:17.207 ml/*: missing internal dependency, "mldev" 00:05:17.207 vdpa/ifc: not in enabled drivers build config 00:05:17.207 vdpa/mlx5: not in enabled drivers build config 00:05:17.207 vdpa/nfp: not in enabled drivers build config 00:05:17.207 vdpa/sfc: not in enabled drivers build config 00:05:17.207 event/*: missing internal dependency, "eventdev" 00:05:17.207 baseband/*: missing internal dependency, "bbdev" 00:05:17.207 gpu/*: missing internal dependency, "gpudev" 00:05:17.207 00:05:17.207 00:05:17.207 Build targets in project: 85 00:05:17.207 00:05:17.207 DPDK 24.03.0 00:05:17.207 00:05:17.207 User defined options 00:05:17.207 buildtype : debug 00:05:17.207 default_library : shared 00:05:17.207 libdir : lib 00:05:17.207 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:17.207 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:17.207 c_link_args : 00:05:17.207 cpu_instruction_set: native 00:05:17.207 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:17.207 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:17.207 enable_docs : false 00:05:17.207 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:17.207 enable_kmods : false 00:05:17.207 max_lcores : 128 00:05:17.207 tests : false 00:05:17.207 00:05:17.207 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:17.207 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:17.207 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:17.207 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:17.207 [3/268] Linking static target lib/librte_kvargs.a 00:05:17.207 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:17.207 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:17.207 [6/268] Linking static target lib/librte_log.a 00:05:17.208 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.208 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:17.208 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:17.208 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:17.208 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:17.467 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:17.467 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:17.467 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:17.467 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:17.467 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:17.467 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:17.467 [18/268] Linking static target lib/librte_telemetry.a 00:05:17.727 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.727 [20/268] Linking target lib/librte_log.so.24.1 00:05:17.986 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:18.244 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:18.244 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:18.244 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:18.244 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:18.244 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:18.244 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.503 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:18.503 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:18.503 [30/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:18.503 [31/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:18.503 [32/268] Linking target lib/librte_telemetry.so.24.1 00:05:18.503 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:18.503 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:18.763 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:18.763 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:18.763 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:19.358 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:19.358 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:19.358 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:19.358 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:19.358 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:19.358 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:19.358 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:19.358 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:19.358 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:19.616 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:19.616 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:19.616 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:19.616 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:19.874 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:20.133 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:20.133 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:20.133 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:20.392 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:20.392 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:20.392 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:20.650 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:20.650 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:20.650 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:20.650 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:20.650 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:21.217 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:21.217 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:21.217 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:21.217 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:21.475 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:21.475 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:21.475 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:21.475 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:21.475 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:21.475 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:21.734 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:21.734 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:21.992 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:21.992 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:22.250 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:22.250 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:22.250 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:22.250 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:22.250 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:22.509 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:22.509 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:22.509 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:22.768 [85/268] Linking static target lib/librte_ring.a 00:05:22.768 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:22.768 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:22.768 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:22.768 [89/268] Linking static target lib/librte_eal.a 00:05:23.026 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:23.026 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:23.026 [92/268] Linking static target lib/librte_rcu.a 00:05:23.026 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.284 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:23.284 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:23.284 [96/268] Linking static target lib/librte_mempool.a 00:05:23.284 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:23.284 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:23.284 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:23.284 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:23.543 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:23.543 [102/268] Linking static target lib/librte_mbuf.a 00:05:23.543 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.801 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:23.801 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:23.801 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:24.100 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:24.100 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:24.100 [109/268] Linking static target lib/librte_net.a 00:05:24.101 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:24.101 [111/268] Linking static target lib/librte_meter.a 00:05:24.359 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:24.618 [113/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:24.618 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.618 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.618 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:24.618 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.618 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.618 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:24.876 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:25.134 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:25.392 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:25.650 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:25.650 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:25.650 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:25.650 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:25.650 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:25.650 [128/268] Linking static target lib/librte_pci.a 00:05:25.650 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:25.650 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:25.909 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:25.909 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:25.909 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:25.909 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:25.909 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:25.909 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:26.167 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:26.167 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:26.167 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:26.167 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:26.167 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:26.167 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:26.167 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:26.167 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:26.426 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:26.426 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:26.426 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:26.426 [148/268] Linking static target lib/librte_cmdline.a 00:05:26.426 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:26.685 [150/268] Linking static target lib/librte_ethdev.a 00:05:26.685 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:26.943 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:26.943 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:26.943 [154/268] Linking static target lib/librte_timer.a 00:05:26.943 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:26.943 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:27.202 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:27.460 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:27.460 [159/268] Linking static target lib/librte_hash.a 00:05:27.460 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:27.460 [161/268] Linking static target lib/librte_compressdev.a 00:05:27.718 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:27.718 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:27.718 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:27.718 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:27.718 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:27.976 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:28.234 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.234 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:28.234 [170/268] Linking static target lib/librte_dmadev.a 00:05:28.234 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:28.234 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:28.234 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:28.507 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.507 [175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:28.507 [176/268] Linking static target lib/librte_cryptodev.a 00:05:28.507 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:28.776 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:28.776 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:28.776 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:29.035 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:29.035 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:29.035 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:29.035 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.295 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:29.295 [186/268] Linking static target lib/librte_power.a 00:05:29.552 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:29.552 [188/268] Linking static target lib/librte_reorder.a 00:05:29.552 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:29.810 [190/268] Linking static target lib/librte_security.a 00:05:29.810 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:29.810 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:05:29.810 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:30.069 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:30.069 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.635 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.635 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.635 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:30.635 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:30.635 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:30.893 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:31.151 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:31.151 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:31.151 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:31.410 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:31.410 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:31.668 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:31.668 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:31.668 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:31.668 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:31.668 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:31.668 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:31.926 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:31.926 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:31.926 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:31.926 [216/268] Linking static target drivers/librte_bus_pci.a 00:05:31.926 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:31.926 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:31.926 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:31.926 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:31.926 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:31.926 [222/268] Linking static target drivers/librte_bus_vdev.a 00:05:32.185 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:32.185 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:32.185 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:32.185 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:32.185 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:32.443 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.407 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:33.407 [230/268] Linking static target lib/librte_vhost.a 00:05:33.974 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.974 [232/268] Linking target lib/librte_eal.so.24.1 00:05:34.232 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:34.232 [234/268] Linking target lib/librte_meter.so.24.1 00:05:34.232 [235/268] Linking target lib/librte_ring.so.24.1 00:05:34.232 [236/268] Linking target lib/librte_pci.so.24.1 00:05:34.233 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:34.233 [238/268] Linking target lib/librte_timer.so.24.1 00:05:34.233 [239/268] Linking target lib/librte_dmadev.so.24.1 00:05:34.233 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:34.233 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:34.233 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:34.233 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:34.491 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:34.491 [245/268] Linking target lib/librte_rcu.so.24.1 00:05:34.491 [246/268] Linking target lib/librte_mempool.so.24.1 00:05:34.491 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:34.491 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.491 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:34.492 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:34.492 [251/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.492 [252/268] Linking target lib/librte_mbuf.so.24.1 00:05:34.492 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:34.750 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:34.750 [255/268] Linking target lib/librte_net.so.24.1 00:05:34.751 [256/268] Linking target lib/librte_reorder.so.24.1 00:05:34.751 [257/268] Linking target lib/librte_compressdev.so.24.1 00:05:34.751 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:05:34.751 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:34.751 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:35.010 [261/268] Linking target lib/librte_hash.so.24.1 00:05:35.010 [262/268] Linking target lib/librte_cmdline.so.24.1 00:05:35.010 [263/268] Linking target lib/librte_security.so.24.1 00:05:35.010 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:35.010 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:35.010 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:35.010 [267/268] Linking target lib/librte_power.so.24.1 00:05:35.268 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:35.268 INFO: autodetecting backend as ninja 00:05:35.268 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:02.006 CC lib/log/log.o 00:06:02.006 CC lib/log/log_flags.o 00:06:02.006 CC lib/log/log_deprecated.o 00:06:02.006 CC lib/ut_mock/mock.o 00:06:02.006 CC lib/ut/ut.o 00:06:02.006 LIB 
libspdk_ut_mock.a 00:06:02.006 LIB libspdk_ut.a 00:06:02.006 LIB libspdk_log.a 00:06:02.006 SO libspdk_ut_mock.so.6.0 00:06:02.006 SO libspdk_ut.so.2.0 00:06:02.006 SO libspdk_log.so.7.1 00:06:02.006 SYMLINK libspdk_ut_mock.so 00:06:02.006 SYMLINK libspdk_ut.so 00:06:02.006 SYMLINK libspdk_log.so 00:06:02.265 CC lib/ioat/ioat.o 00:06:02.265 CC lib/dma/dma.o 00:06:02.265 CXX lib/trace_parser/trace.o 00:06:02.265 CC lib/util/base64.o 00:06:02.265 CC lib/util/bit_array.o 00:06:02.265 CC lib/util/cpuset.o 00:06:02.265 CC lib/util/crc16.o 00:06:02.265 CC lib/util/crc32.o 00:06:02.265 CC lib/util/crc32c.o 00:06:02.265 CC lib/vfio_user/host/vfio_user_pci.o 00:06:02.265 CC lib/util/crc32_ieee.o 00:06:02.524 CC lib/util/crc64.o 00:06:02.524 CC lib/util/dif.o 00:06:02.524 LIB libspdk_dma.a 00:06:02.524 CC lib/util/fd.o 00:06:02.524 CC lib/vfio_user/host/vfio_user.o 00:06:02.524 SO libspdk_dma.so.5.0 00:06:02.524 CC lib/util/fd_group.o 00:06:02.524 CC lib/util/file.o 00:06:02.524 SYMLINK libspdk_dma.so 00:06:02.524 CC lib/util/hexlify.o 00:06:02.524 LIB libspdk_ioat.a 00:06:02.524 CC lib/util/iov.o 00:06:02.524 SO libspdk_ioat.so.7.0 00:06:02.783 CC lib/util/math.o 00:06:02.783 CC lib/util/net.o 00:06:02.783 SYMLINK libspdk_ioat.so 00:06:02.783 LIB libspdk_vfio_user.a 00:06:02.783 CC lib/util/pipe.o 00:06:02.783 CC lib/util/strerror_tls.o 00:06:02.783 CC lib/util/string.o 00:06:02.783 SO libspdk_vfio_user.so.5.0 00:06:02.783 CC lib/util/uuid.o 00:06:02.783 CC lib/util/xor.o 00:06:02.783 SYMLINK libspdk_vfio_user.so 00:06:02.783 CC lib/util/zipf.o 00:06:02.783 CC lib/util/md5.o 00:06:03.042 LIB libspdk_util.a 00:06:03.300 SO libspdk_util.so.10.1 00:06:03.300 LIB libspdk_trace_parser.a 00:06:03.300 SO libspdk_trace_parser.so.6.0 00:06:03.300 SYMLINK libspdk_util.so 00:06:03.300 SYMLINK libspdk_trace_parser.so 00:06:03.559 CC lib/vmd/vmd.o 00:06:03.559 CC lib/idxd/idxd.o 00:06:03.559 CC lib/rdma_utils/rdma_utils.o 00:06:03.559 CC lib/vmd/led.o 00:06:03.559 CC lib/idxd/idxd_user.o 00:06:03.559 CC lib/conf/conf.o 00:06:03.559 CC lib/env_dpdk/env.o 00:06:03.559 CC lib/idxd/idxd_kernel.o 00:06:03.559 CC lib/rdma_provider/common.o 00:06:03.559 CC lib/json/json_parse.o 00:06:03.818 CC lib/json/json_util.o 00:06:03.818 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:03.818 CC lib/env_dpdk/memory.o 00:06:03.818 CC lib/env_dpdk/pci.o 00:06:03.818 CC lib/json/json_write.o 00:06:03.818 LIB libspdk_conf.a 00:06:03.818 SO libspdk_conf.so.6.0 00:06:03.818 LIB libspdk_rdma_utils.a 00:06:03.818 SO libspdk_rdma_utils.so.1.0 00:06:03.818 SYMLINK libspdk_conf.so 00:06:03.818 CC lib/env_dpdk/init.o 00:06:03.818 LIB libspdk_rdma_provider.a 00:06:03.818 SYMLINK libspdk_rdma_utils.so 00:06:03.818 CC lib/env_dpdk/threads.o 00:06:04.076 SO libspdk_rdma_provider.so.6.0 00:06:04.076 CC lib/env_dpdk/pci_ioat.o 00:06:04.076 SYMLINK libspdk_rdma_provider.so 00:06:04.076 CC lib/env_dpdk/pci_virtio.o 00:06:04.076 LIB libspdk_json.a 00:06:04.076 CC lib/env_dpdk/pci_vmd.o 00:06:04.076 SO libspdk_json.so.6.0 00:06:04.076 CC lib/env_dpdk/pci_idxd.o 00:06:04.076 CC lib/env_dpdk/pci_event.o 00:06:04.076 LIB libspdk_vmd.a 00:06:04.076 CC lib/env_dpdk/sigbus_handler.o 00:06:04.076 LIB libspdk_idxd.a 00:06:04.335 SYMLINK libspdk_json.so 00:06:04.335 SO libspdk_vmd.so.6.0 00:06:04.335 SO libspdk_idxd.so.12.1 00:06:04.335 CC lib/env_dpdk/pci_dpdk.o 00:06:04.335 SYMLINK libspdk_vmd.so 00:06:04.335 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:04.335 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:04.335 SYMLINK libspdk_idxd.so 00:06:04.335 CC 
lib/jsonrpc/jsonrpc_server.o 00:06:04.335 CC lib/jsonrpc/jsonrpc_client.o 00:06:04.335 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:04.335 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:04.593 LIB libspdk_jsonrpc.a 00:06:04.593 SO libspdk_jsonrpc.so.6.0 00:06:04.852 SYMLINK libspdk_jsonrpc.so 00:06:05.110 CC lib/rpc/rpc.o 00:06:05.110 LIB libspdk_env_dpdk.a 00:06:05.110 SO libspdk_env_dpdk.so.15.1 00:06:05.368 SYMLINK libspdk_env_dpdk.so 00:06:05.368 LIB libspdk_rpc.a 00:06:05.368 SO libspdk_rpc.so.6.0 00:06:05.368 SYMLINK libspdk_rpc.so 00:06:05.626 CC lib/notify/notify_rpc.o 00:06:05.626 CC lib/notify/notify.o 00:06:05.626 CC lib/trace/trace.o 00:06:05.626 CC lib/trace/trace_flags.o 00:06:05.626 CC lib/trace/trace_rpc.o 00:06:05.626 CC lib/keyring/keyring.o 00:06:05.626 CC lib/keyring/keyring_rpc.o 00:06:05.885 LIB libspdk_notify.a 00:06:05.885 SO libspdk_notify.so.6.0 00:06:05.885 SYMLINK libspdk_notify.so 00:06:05.885 LIB libspdk_keyring.a 00:06:05.885 SO libspdk_keyring.so.2.0 00:06:05.885 LIB libspdk_trace.a 00:06:06.144 SO libspdk_trace.so.11.0 00:06:06.144 SYMLINK libspdk_keyring.so 00:06:06.144 SYMLINK libspdk_trace.so 00:06:06.403 CC lib/thread/iobuf.o 00:06:06.403 CC lib/thread/thread.o 00:06:06.403 CC lib/sock/sock.o 00:06:06.403 CC lib/sock/sock_rpc.o 00:06:06.970 LIB libspdk_sock.a 00:06:06.970 SO libspdk_sock.so.10.0 00:06:06.970 SYMLINK libspdk_sock.so 00:06:07.227 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:07.227 CC lib/nvme/nvme_ns_cmd.o 00:06:07.227 CC lib/nvme/nvme_ctrlr.o 00:06:07.227 CC lib/nvme/nvme_ns.o 00:06:07.227 CC lib/nvme/nvme_fabric.o 00:06:07.227 CC lib/nvme/nvme_qpair.o 00:06:07.227 CC lib/nvme/nvme_pcie_common.o 00:06:07.227 CC lib/nvme/nvme_pcie.o 00:06:07.227 CC lib/nvme/nvme.o 00:06:08.163 CC lib/nvme/nvme_quirks.o 00:06:08.163 LIB libspdk_thread.a 00:06:08.163 CC lib/nvme/nvme_transport.o 00:06:08.163 SO libspdk_thread.so.11.0 00:06:08.163 CC lib/nvme/nvme_discovery.o 00:06:08.163 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:08.163 SYMLINK libspdk_thread.so 00:06:08.163 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:08.163 CC lib/nvme/nvme_tcp.o 00:06:08.163 CC lib/nvme/nvme_opal.o 00:06:08.163 CC lib/nvme/nvme_io_msg.o 00:06:08.422 CC lib/nvme/nvme_poll_group.o 00:06:08.680 CC lib/nvme/nvme_zns.o 00:06:08.680 CC lib/nvme/nvme_stubs.o 00:06:08.680 CC lib/nvme/nvme_auth.o 00:06:08.944 CC lib/nvme/nvme_cuse.o 00:06:08.944 CC lib/nvme/nvme_rdma.o 00:06:08.944 CC lib/accel/accel.o 00:06:09.204 CC lib/blob/blobstore.o 00:06:09.204 CC lib/init/json_config.o 00:06:09.463 CC lib/virtio/virtio.o 00:06:09.463 CC lib/fsdev/fsdev.o 00:06:09.463 CC lib/init/subsystem.o 00:06:09.721 CC lib/virtio/virtio_vhost_user.o 00:06:09.721 CC lib/fsdev/fsdev_io.o 00:06:09.721 CC lib/fsdev/fsdev_rpc.o 00:06:09.721 CC lib/init/subsystem_rpc.o 00:06:09.980 CC lib/init/rpc.o 00:06:09.980 CC lib/accel/accel_rpc.o 00:06:09.980 CC lib/virtio/virtio_vfio_user.o 00:06:09.980 CC lib/virtio/virtio_pci.o 00:06:09.980 CC lib/blob/request.o 00:06:09.980 LIB libspdk_init.a 00:06:09.980 CC lib/accel/accel_sw.o 00:06:10.239 SO libspdk_init.so.6.0 00:06:10.239 CC lib/blob/zeroes.o 00:06:10.239 LIB libspdk_fsdev.a 00:06:10.239 CC lib/blob/blob_bs_dev.o 00:06:10.239 SYMLINK libspdk_init.so 00:06:10.239 SO libspdk_fsdev.so.2.0 00:06:10.239 SYMLINK libspdk_fsdev.so 00:06:10.239 LIB libspdk_virtio.a 00:06:10.239 SO libspdk_virtio.so.7.0 00:06:10.239 LIB libspdk_nvme.a 00:06:10.239 CC lib/event/app.o 00:06:10.239 CC lib/event/reactor.o 00:06:10.239 CC lib/event/log_rpc.o 00:06:10.239 CC lib/event/app_rpc.o 00:06:10.498 SYMLINK 
libspdk_virtio.so 00:06:10.498 CC lib/event/scheduler_static.o 00:06:10.498 LIB libspdk_accel.a 00:06:10.498 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:10.498 SO libspdk_accel.so.16.0 00:06:10.498 SYMLINK libspdk_accel.so 00:06:10.498 SO libspdk_nvme.so.15.0 00:06:10.757 CC lib/bdev/bdev.o 00:06:10.757 CC lib/bdev/bdev_rpc.o 00:06:10.757 CC lib/bdev/scsi_nvme.o 00:06:10.757 CC lib/bdev/part.o 00:06:10.757 CC lib/bdev/bdev_zone.o 00:06:10.757 SYMLINK libspdk_nvme.so 00:06:10.757 LIB libspdk_event.a 00:06:10.757 SO libspdk_event.so.14.0 00:06:11.015 SYMLINK libspdk_event.so 00:06:11.015 LIB libspdk_fuse_dispatcher.a 00:06:11.015 SO libspdk_fuse_dispatcher.so.1.0 00:06:11.274 SYMLINK libspdk_fuse_dispatcher.so 00:06:12.212 LIB libspdk_blob.a 00:06:12.212 SO libspdk_blob.so.11.0 00:06:12.471 SYMLINK libspdk_blob.so 00:06:12.730 CC lib/lvol/lvol.o 00:06:12.730 CC lib/blobfs/tree.o 00:06:12.730 CC lib/blobfs/blobfs.o 00:06:13.666 LIB libspdk_bdev.a 00:06:13.666 LIB libspdk_blobfs.a 00:06:13.666 SO libspdk_bdev.so.17.0 00:06:13.666 SO libspdk_blobfs.so.10.0 00:06:13.666 SYMLINK libspdk_bdev.so 00:06:13.666 LIB libspdk_lvol.a 00:06:13.666 SYMLINK libspdk_blobfs.so 00:06:13.666 SO libspdk_lvol.so.10.0 00:06:13.666 SYMLINK libspdk_lvol.so 00:06:13.666 CC lib/nbd/nbd.o 00:06:13.666 CC lib/nbd/nbd_rpc.o 00:06:13.666 CC lib/ublk/ublk.o 00:06:13.666 CC lib/ublk/ublk_rpc.o 00:06:13.666 CC lib/scsi/dev.o 00:06:13.666 CC lib/scsi/lun.o 00:06:13.666 CC lib/nvmf/ctrlr.o 00:06:13.666 CC lib/nvmf/ctrlr_discovery.o 00:06:13.666 CC lib/nvmf/ctrlr_bdev.o 00:06:13.666 CC lib/ftl/ftl_core.o 00:06:13.924 CC lib/nvmf/subsystem.o 00:06:13.924 CC lib/nvmf/nvmf.o 00:06:14.183 CC lib/scsi/port.o 00:06:14.183 CC lib/nvmf/nvmf_rpc.o 00:06:14.183 LIB libspdk_nbd.a 00:06:14.183 SO libspdk_nbd.so.7.0 00:06:14.183 CC lib/ftl/ftl_init.o 00:06:14.183 CC lib/scsi/scsi.o 00:06:14.183 SYMLINK libspdk_nbd.so 00:06:14.183 CC lib/scsi/scsi_bdev.o 00:06:14.183 CC lib/ftl/ftl_layout.o 00:06:14.442 CC lib/ftl/ftl_debug.o 00:06:14.442 LIB libspdk_ublk.a 00:06:14.442 CC lib/ftl/ftl_io.o 00:06:14.442 SO libspdk_ublk.so.3.0 00:06:14.442 CC lib/nvmf/transport.o 00:06:14.442 SYMLINK libspdk_ublk.so 00:06:14.442 CC lib/nvmf/tcp.o 00:06:14.700 CC lib/nvmf/stubs.o 00:06:14.701 CC lib/ftl/ftl_sb.o 00:06:14.701 CC lib/scsi/scsi_pr.o 00:06:14.701 CC lib/scsi/scsi_rpc.o 00:06:14.961 CC lib/ftl/ftl_l2p.o 00:06:14.961 CC lib/scsi/task.o 00:06:14.961 CC lib/nvmf/mdns_server.o 00:06:14.961 CC lib/ftl/ftl_l2p_flat.o 00:06:14.961 CC lib/ftl/ftl_nv_cache.o 00:06:14.961 CC lib/ftl/ftl_band.o 00:06:15.220 CC lib/nvmf/rdma.o 00:06:15.220 LIB libspdk_scsi.a 00:06:15.220 CC lib/nvmf/auth.o 00:06:15.220 CC lib/ftl/ftl_band_ops.o 00:06:15.220 SO libspdk_scsi.so.9.0 00:06:15.220 CC lib/ftl/ftl_writer.o 00:06:15.220 SYMLINK libspdk_scsi.so 00:06:15.220 CC lib/ftl/ftl_rq.o 00:06:15.479 CC lib/ftl/ftl_reloc.o 00:06:15.479 CC lib/ftl/ftl_l2p_cache.o 00:06:15.479 CC lib/ftl/ftl_p2l.o 00:06:15.479 CC lib/ftl/ftl_p2l_log.o 00:06:15.736 CC lib/iscsi/conn.o 00:06:15.736 CC lib/vhost/vhost.o 00:06:15.737 CC lib/ftl/mngt/ftl_mngt.o 00:06:15.995 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:15.995 CC lib/iscsi/init_grp.o 00:06:15.995 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:15.995 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:15.995 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:15.995 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:15.995 CC lib/iscsi/iscsi.o 00:06:16.253 CC lib/iscsi/param.o 00:06:16.253 CC lib/iscsi/portal_grp.o 00:06:16.253 CC lib/iscsi/tgt_node.o 00:06:16.253 CC 
lib/iscsi/iscsi_subsystem.o 00:06:16.253 CC lib/iscsi/iscsi_rpc.o 00:06:16.253 CC lib/iscsi/task.o 00:06:16.253 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:16.512 CC lib/vhost/vhost_rpc.o 00:06:16.512 CC lib/vhost/vhost_scsi.o 00:06:16.512 CC lib/vhost/vhost_blk.o 00:06:16.512 CC lib/vhost/rte_vhost_user.o 00:06:16.512 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:16.773 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:16.773 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:16.773 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:16.773 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:17.042 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:17.042 CC lib/ftl/utils/ftl_conf.o 00:06:17.042 CC lib/ftl/utils/ftl_md.o 00:06:17.042 LIB libspdk_nvmf.a 00:06:17.042 CC lib/ftl/utils/ftl_mempool.o 00:06:17.300 CC lib/ftl/utils/ftl_bitmap.o 00:06:17.300 CC lib/ftl/utils/ftl_property.o 00:06:17.300 SO libspdk_nvmf.so.20.0 00:06:17.300 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:17.300 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:17.300 SYMLINK libspdk_nvmf.so 00:06:17.300 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:17.300 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:17.558 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:17.558 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:17.558 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:17.558 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:17.558 LIB libspdk_iscsi.a 00:06:17.558 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:17.558 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:17.558 SO libspdk_iscsi.so.8.0 00:06:17.558 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:17.558 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:17.558 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:17.558 CC lib/ftl/base/ftl_base_dev.o 00:06:17.558 CC lib/ftl/base/ftl_base_bdev.o 00:06:17.817 LIB libspdk_vhost.a 00:06:17.817 CC lib/ftl/ftl_trace.o 00:06:17.817 SYMLINK libspdk_iscsi.so 00:06:17.817 SO libspdk_vhost.so.8.0 00:06:17.817 SYMLINK libspdk_vhost.so 00:06:18.075 LIB libspdk_ftl.a 00:06:18.334 SO libspdk_ftl.so.9.0 00:06:18.593 SYMLINK libspdk_ftl.so 00:06:18.851 CC module/env_dpdk/env_dpdk_rpc.o 00:06:18.851 CC module/blob/bdev/blob_bdev.o 00:06:18.851 CC module/sock/posix/posix.o 00:06:18.851 CC module/accel/dsa/accel_dsa.o 00:06:18.851 CC module/sock/uring/uring.o 00:06:18.851 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:18.851 CC module/accel/error/accel_error.o 00:06:18.851 CC module/fsdev/aio/fsdev_aio.o 00:06:18.851 CC module/accel/ioat/accel_ioat.o 00:06:18.851 CC module/keyring/file/keyring.o 00:06:18.851 LIB libspdk_env_dpdk_rpc.a 00:06:19.109 SO libspdk_env_dpdk_rpc.so.6.0 00:06:19.109 CC module/keyring/file/keyring_rpc.o 00:06:19.109 SYMLINK libspdk_env_dpdk_rpc.so 00:06:19.109 CC module/accel/error/accel_error_rpc.o 00:06:19.109 LIB libspdk_scheduler_dynamic.a 00:06:19.109 CC module/accel/ioat/accel_ioat_rpc.o 00:06:19.109 SO libspdk_scheduler_dynamic.so.4.0 00:06:19.109 LIB libspdk_blob_bdev.a 00:06:19.109 CC module/accel/dsa/accel_dsa_rpc.o 00:06:19.109 SO libspdk_blob_bdev.so.11.0 00:06:19.367 SYMLINK libspdk_scheduler_dynamic.so 00:06:19.367 LIB libspdk_accel_error.a 00:06:19.367 LIB libspdk_keyring_file.a 00:06:19.367 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:19.367 SO libspdk_accel_error.so.2.0 00:06:19.367 SO libspdk_keyring_file.so.2.0 00:06:19.367 SYMLINK libspdk_blob_bdev.so 00:06:19.367 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:19.367 LIB libspdk_accel_ioat.a 00:06:19.367 SYMLINK libspdk_accel_error.so 00:06:19.367 SYMLINK libspdk_keyring_file.so 00:06:19.367 SO libspdk_accel_ioat.so.6.0 00:06:19.367 LIB libspdk_accel_dsa.a 00:06:19.367 SO 
libspdk_accel_dsa.so.5.0 00:06:19.367 SYMLINK libspdk_accel_ioat.so 00:06:19.367 LIB libspdk_scheduler_dpdk_governor.a 00:06:19.367 CC module/scheduler/gscheduler/gscheduler.o 00:06:19.625 CC module/fsdev/aio/linux_aio_mgr.o 00:06:19.625 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:19.625 SYMLINK libspdk_accel_dsa.so 00:06:19.625 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:19.625 CC module/keyring/linux/keyring.o 00:06:19.625 CC module/keyring/linux/keyring_rpc.o 00:06:19.625 LIB libspdk_sock_uring.a 00:06:19.625 LIB libspdk_scheduler_gscheduler.a 00:06:19.625 LIB libspdk_sock_posix.a 00:06:19.625 CC module/bdev/delay/vbdev_delay.o 00:06:19.625 SO libspdk_sock_uring.so.5.0 00:06:19.625 SO libspdk_scheduler_gscheduler.so.4.0 00:06:19.625 CC module/accel/iaa/accel_iaa.o 00:06:19.625 LIB libspdk_fsdev_aio.a 00:06:19.625 SO libspdk_sock_posix.so.6.0 00:06:19.625 SO libspdk_fsdev_aio.so.1.0 00:06:19.884 CC module/blobfs/bdev/blobfs_bdev.o 00:06:19.884 SYMLINK libspdk_sock_uring.so 00:06:19.884 SYMLINK libspdk_scheduler_gscheduler.so 00:06:19.884 CC module/accel/iaa/accel_iaa_rpc.o 00:06:19.884 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:19.884 CC module/bdev/error/vbdev_error.o 00:06:19.884 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:19.884 LIB libspdk_keyring_linux.a 00:06:19.884 SYMLINK libspdk_sock_posix.so 00:06:19.884 CC module/bdev/error/vbdev_error_rpc.o 00:06:19.884 SO libspdk_keyring_linux.so.1.0 00:06:19.884 SYMLINK libspdk_fsdev_aio.so 00:06:19.884 SYMLINK libspdk_keyring_linux.so 00:06:19.884 LIB libspdk_accel_iaa.a 00:06:19.884 LIB libspdk_blobfs_bdev.a 00:06:19.884 SO libspdk_accel_iaa.so.3.0 00:06:19.884 SO libspdk_blobfs_bdev.so.6.0 00:06:20.142 CC module/bdev/gpt/gpt.o 00:06:20.142 CC module/bdev/gpt/vbdev_gpt.o 00:06:20.142 SYMLINK libspdk_accel_iaa.so 00:06:20.142 LIB libspdk_bdev_error.a 00:06:20.142 CC module/bdev/lvol/vbdev_lvol.o 00:06:20.142 SYMLINK libspdk_blobfs_bdev.so 00:06:20.142 LIB libspdk_bdev_delay.a 00:06:20.142 SO libspdk_bdev_error.so.6.0 00:06:20.142 CC module/bdev/malloc/bdev_malloc.o 00:06:20.142 SO libspdk_bdev_delay.so.6.0 00:06:20.142 CC module/bdev/nvme/bdev_nvme.o 00:06:20.142 CC module/bdev/null/bdev_null.o 00:06:20.142 SYMLINK libspdk_bdev_error.so 00:06:20.142 SYMLINK libspdk_bdev_delay.so 00:06:20.142 CC module/bdev/null/bdev_null_rpc.o 00:06:20.142 CC module/bdev/passthru/vbdev_passthru.o 00:06:20.142 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:20.142 CC module/bdev/raid/bdev_raid.o 00:06:20.400 LIB libspdk_bdev_gpt.a 00:06:20.400 SO libspdk_bdev_gpt.so.6.0 00:06:20.400 CC module/bdev/split/vbdev_split.o 00:06:20.400 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:20.400 CC module/bdev/raid/bdev_raid_rpc.o 00:06:20.400 SYMLINK libspdk_bdev_gpt.so 00:06:20.400 LIB libspdk_bdev_null.a 00:06:20.400 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:20.400 SO libspdk_bdev_null.so.6.0 00:06:20.400 LIB libspdk_bdev_passthru.a 00:06:20.658 SO libspdk_bdev_passthru.so.6.0 00:06:20.658 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:20.658 SYMLINK libspdk_bdev_null.so 00:06:20.658 CC module/bdev/raid/bdev_raid_sb.o 00:06:20.658 CC module/bdev/split/vbdev_split_rpc.o 00:06:20.658 SYMLINK libspdk_bdev_passthru.so 00:06:20.658 CC module/bdev/raid/raid0.o 00:06:20.658 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:20.658 LIB libspdk_bdev_malloc.a 00:06:20.658 SO libspdk_bdev_malloc.so.6.0 00:06:20.658 LIB libspdk_bdev_lvol.a 00:06:20.658 CC module/bdev/uring/bdev_uring.o 00:06:20.915 SO libspdk_bdev_lvol.so.6.0 00:06:20.915 LIB libspdk_bdev_split.a 
00:06:20.915 SYMLINK libspdk_bdev_malloc.so 00:06:20.915 CC module/bdev/uring/bdev_uring_rpc.o 00:06:20.915 SO libspdk_bdev_split.so.6.0 00:06:20.915 SYMLINK libspdk_bdev_lvol.so 00:06:20.915 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:20.915 CC module/bdev/raid/raid1.o 00:06:20.915 SYMLINK libspdk_bdev_split.so 00:06:20.915 CC module/bdev/raid/concat.o 00:06:20.915 CC module/bdev/nvme/nvme_rpc.o 00:06:20.915 LIB libspdk_bdev_zone_block.a 00:06:21.173 SO libspdk_bdev_zone_block.so.6.0 00:06:21.173 CC module/bdev/aio/bdev_aio.o 00:06:21.173 LIB libspdk_bdev_uring.a 00:06:21.173 SYMLINK libspdk_bdev_zone_block.so 00:06:21.173 SO libspdk_bdev_uring.so.6.0 00:06:21.174 CC module/bdev/ftl/bdev_ftl.o 00:06:21.174 CC module/bdev/aio/bdev_aio_rpc.o 00:06:21.174 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:21.174 SYMLINK libspdk_bdev_uring.so 00:06:21.174 CC module/bdev/nvme/bdev_mdns_client.o 00:06:21.174 CC module/bdev/nvme/vbdev_opal.o 00:06:21.174 LIB libspdk_bdev_raid.a 00:06:21.431 SO libspdk_bdev_raid.so.6.0 00:06:21.431 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:21.431 CC module/bdev/iscsi/bdev_iscsi.o 00:06:21.431 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:21.431 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:21.431 SYMLINK libspdk_bdev_raid.so 00:06:21.431 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:21.431 LIB libspdk_bdev_aio.a 00:06:21.431 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:21.431 SO libspdk_bdev_aio.so.6.0 00:06:21.431 LIB libspdk_bdev_ftl.a 00:06:21.431 SO libspdk_bdev_ftl.so.6.0 00:06:21.690 SYMLINK libspdk_bdev_aio.so 00:06:21.690 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:21.690 SYMLINK libspdk_bdev_ftl.so 00:06:21.690 LIB libspdk_bdev_iscsi.a 00:06:21.949 SO libspdk_bdev_iscsi.so.6.0 00:06:21.949 SYMLINK libspdk_bdev_iscsi.so 00:06:21.949 LIB libspdk_bdev_virtio.a 00:06:21.949 SO libspdk_bdev_virtio.so.6.0 00:06:21.949 SYMLINK libspdk_bdev_virtio.so 00:06:22.937 LIB libspdk_bdev_nvme.a 00:06:22.937 SO libspdk_bdev_nvme.so.7.1 00:06:22.937 SYMLINK libspdk_bdev_nvme.so 00:06:23.197 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:23.197 CC module/event/subsystems/keyring/keyring.o 00:06:23.197 CC module/event/subsystems/sock/sock.o 00:06:23.197 CC module/event/subsystems/iobuf/iobuf.o 00:06:23.197 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:23.197 CC module/event/subsystems/vmd/vmd.o 00:06:23.197 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:23.456 CC module/event/subsystems/scheduler/scheduler.o 00:06:23.456 CC module/event/subsystems/fsdev/fsdev.o 00:06:23.456 LIB libspdk_event_scheduler.a 00:06:23.456 LIB libspdk_event_keyring.a 00:06:23.456 LIB libspdk_event_vhost_blk.a 00:06:23.456 LIB libspdk_event_fsdev.a 00:06:23.456 LIB libspdk_event_sock.a 00:06:23.456 SO libspdk_event_scheduler.so.4.0 00:06:23.456 SO libspdk_event_keyring.so.1.0 00:06:23.456 SO libspdk_event_fsdev.so.1.0 00:06:23.456 SO libspdk_event_vhost_blk.so.3.0 00:06:23.456 LIB libspdk_event_vmd.a 00:06:23.456 LIB libspdk_event_iobuf.a 00:06:23.456 SO libspdk_event_sock.so.5.0 00:06:23.456 SO libspdk_event_vmd.so.6.0 00:06:23.456 SO libspdk_event_iobuf.so.3.0 00:06:23.456 SYMLINK libspdk_event_scheduler.so 00:06:23.456 SYMLINK libspdk_event_fsdev.so 00:06:23.456 SYMLINK libspdk_event_keyring.so 00:06:23.456 SYMLINK libspdk_event_vhost_blk.so 00:06:23.715 SYMLINK libspdk_event_sock.so 00:06:23.715 SYMLINK libspdk_event_iobuf.so 00:06:23.715 SYMLINK libspdk_event_vmd.so 00:06:23.974 CC module/event/subsystems/accel/accel.o 00:06:23.974 LIB libspdk_event_accel.a 00:06:23.974 SO 
libspdk_event_accel.so.6.0 00:06:24.234 SYMLINK libspdk_event_accel.so 00:06:24.493 CC module/event/subsystems/bdev/bdev.o 00:06:24.754 LIB libspdk_event_bdev.a 00:06:24.754 SO libspdk_event_bdev.so.6.0 00:06:24.754 SYMLINK libspdk_event_bdev.so 00:06:25.013 CC module/event/subsystems/scsi/scsi.o 00:06:25.013 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:25.013 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:25.013 CC module/event/subsystems/ublk/ublk.o 00:06:25.013 CC module/event/subsystems/nbd/nbd.o 00:06:25.013 LIB libspdk_event_ublk.a 00:06:25.271 LIB libspdk_event_scsi.a 00:06:25.271 SO libspdk_event_ublk.so.3.0 00:06:25.271 LIB libspdk_event_nbd.a 00:06:25.271 SO libspdk_event_nbd.so.6.0 00:06:25.271 SO libspdk_event_scsi.so.6.0 00:06:25.271 SYMLINK libspdk_event_ublk.so 00:06:25.271 SYMLINK libspdk_event_nbd.so 00:06:25.271 SYMLINK libspdk_event_scsi.so 00:06:25.271 LIB libspdk_event_nvmf.a 00:06:25.271 SO libspdk_event_nvmf.so.6.0 00:06:25.271 SYMLINK libspdk_event_nvmf.so 00:06:25.531 CC module/event/subsystems/iscsi/iscsi.o 00:06:25.531 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:25.789 LIB libspdk_event_vhost_scsi.a 00:06:25.789 SO libspdk_event_vhost_scsi.so.3.0 00:06:25.789 LIB libspdk_event_iscsi.a 00:06:25.789 SYMLINK libspdk_event_vhost_scsi.so 00:06:25.789 SO libspdk_event_iscsi.so.6.0 00:06:25.789 SYMLINK libspdk_event_iscsi.so 00:06:26.048 SO libspdk.so.6.0 00:06:26.048 SYMLINK libspdk.so 00:06:26.307 CC app/trace_record/trace_record.o 00:06:26.307 CC app/spdk_lspci/spdk_lspci.o 00:06:26.307 CXX app/trace/trace.o 00:06:26.307 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:26.307 CC app/nvmf_tgt/nvmf_main.o 00:06:26.307 CC app/iscsi_tgt/iscsi_tgt.o 00:06:26.307 CC app/spdk_tgt/spdk_tgt.o 00:06:26.307 CC examples/util/zipf/zipf.o 00:06:26.307 CC test/thread/poller_perf/poller_perf.o 00:06:26.307 CC examples/ioat/perf/perf.o 00:06:26.565 LINK spdk_lspci 00:06:26.565 LINK spdk_trace_record 00:06:26.565 LINK poller_perf 00:06:26.565 LINK interrupt_tgt 00:06:26.565 LINK nvmf_tgt 00:06:26.565 LINK iscsi_tgt 00:06:26.565 LINK zipf 00:06:26.565 LINK spdk_tgt 00:06:26.565 LINK ioat_perf 00:06:26.824 LINK spdk_trace 00:06:26.824 CC examples/ioat/verify/verify.o 00:06:26.824 CC app/spdk_nvme_perf/perf.o 00:06:26.824 CC app/spdk_nvme_identify/identify.o 00:06:26.824 CC app/spdk_nvme_discover/discovery_aer.o 00:06:26.824 TEST_HEADER include/spdk/accel.h 00:06:26.824 TEST_HEADER include/spdk/accel_module.h 00:06:26.824 TEST_HEADER include/spdk/assert.h 00:06:26.824 TEST_HEADER include/spdk/barrier.h 00:06:26.824 TEST_HEADER include/spdk/base64.h 00:06:26.824 CC app/spdk_top/spdk_top.o 00:06:26.824 TEST_HEADER include/spdk/bdev.h 00:06:26.824 TEST_HEADER include/spdk/bdev_module.h 00:06:26.824 TEST_HEADER include/spdk/bdev_zone.h 00:06:26.824 TEST_HEADER include/spdk/bit_array.h 00:06:26.824 TEST_HEADER include/spdk/bit_pool.h 00:06:26.824 TEST_HEADER include/spdk/blob_bdev.h 00:06:26.824 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:26.824 CC test/dma/test_dma/test_dma.o 00:06:26.824 TEST_HEADER include/spdk/blobfs.h 00:06:26.824 TEST_HEADER include/spdk/blob.h 00:06:26.824 TEST_HEADER include/spdk/conf.h 00:06:26.824 TEST_HEADER include/spdk/config.h 00:06:26.824 TEST_HEADER include/spdk/cpuset.h 00:06:26.824 TEST_HEADER include/spdk/crc16.h 00:06:26.824 TEST_HEADER include/spdk/crc32.h 00:06:26.824 TEST_HEADER include/spdk/crc64.h 00:06:26.824 TEST_HEADER include/spdk/dif.h 00:06:26.824 TEST_HEADER include/spdk/dma.h 00:06:26.824 TEST_HEADER include/spdk/endian.h 
00:06:26.824 TEST_HEADER include/spdk/env_dpdk.h 00:06:27.083 TEST_HEADER include/spdk/env.h 00:06:27.083 TEST_HEADER include/spdk/event.h 00:06:27.083 TEST_HEADER include/spdk/fd_group.h 00:06:27.083 TEST_HEADER include/spdk/fd.h 00:06:27.083 TEST_HEADER include/spdk/file.h 00:06:27.083 TEST_HEADER include/spdk/fsdev.h 00:06:27.083 TEST_HEADER include/spdk/fsdev_module.h 00:06:27.083 TEST_HEADER include/spdk/ftl.h 00:06:27.083 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:27.083 TEST_HEADER include/spdk/gpt_spec.h 00:06:27.083 TEST_HEADER include/spdk/hexlify.h 00:06:27.083 TEST_HEADER include/spdk/histogram_data.h 00:06:27.083 TEST_HEADER include/spdk/idxd.h 00:06:27.083 TEST_HEADER include/spdk/idxd_spec.h 00:06:27.083 TEST_HEADER include/spdk/init.h 00:06:27.083 TEST_HEADER include/spdk/ioat.h 00:06:27.083 TEST_HEADER include/spdk/ioat_spec.h 00:06:27.083 TEST_HEADER include/spdk/iscsi_spec.h 00:06:27.083 TEST_HEADER include/spdk/json.h 00:06:27.083 TEST_HEADER include/spdk/jsonrpc.h 00:06:27.083 TEST_HEADER include/spdk/keyring.h 00:06:27.083 TEST_HEADER include/spdk/keyring_module.h 00:06:27.083 TEST_HEADER include/spdk/likely.h 00:06:27.083 TEST_HEADER include/spdk/log.h 00:06:27.083 CC test/app/bdev_svc/bdev_svc.o 00:06:27.083 TEST_HEADER include/spdk/lvol.h 00:06:27.083 TEST_HEADER include/spdk/md5.h 00:06:27.083 TEST_HEADER include/spdk/memory.h 00:06:27.083 TEST_HEADER include/spdk/mmio.h 00:06:27.083 TEST_HEADER include/spdk/nbd.h 00:06:27.083 TEST_HEADER include/spdk/net.h 00:06:27.083 TEST_HEADER include/spdk/notify.h 00:06:27.083 TEST_HEADER include/spdk/nvme.h 00:06:27.083 TEST_HEADER include/spdk/nvme_intel.h 00:06:27.083 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:27.083 LINK verify 00:06:27.083 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:27.083 TEST_HEADER include/spdk/nvme_spec.h 00:06:27.083 TEST_HEADER include/spdk/nvme_zns.h 00:06:27.083 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:27.083 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:27.083 TEST_HEADER include/spdk/nvmf.h 00:06:27.083 TEST_HEADER include/spdk/nvmf_spec.h 00:06:27.083 TEST_HEADER include/spdk/nvmf_transport.h 00:06:27.083 TEST_HEADER include/spdk/opal.h 00:06:27.083 TEST_HEADER include/spdk/opal_spec.h 00:06:27.083 TEST_HEADER include/spdk/pci_ids.h 00:06:27.083 TEST_HEADER include/spdk/pipe.h 00:06:27.083 TEST_HEADER include/spdk/queue.h 00:06:27.083 TEST_HEADER include/spdk/reduce.h 00:06:27.083 TEST_HEADER include/spdk/rpc.h 00:06:27.083 TEST_HEADER include/spdk/scheduler.h 00:06:27.083 TEST_HEADER include/spdk/scsi.h 00:06:27.083 TEST_HEADER include/spdk/scsi_spec.h 00:06:27.083 TEST_HEADER include/spdk/sock.h 00:06:27.083 TEST_HEADER include/spdk/stdinc.h 00:06:27.083 TEST_HEADER include/spdk/string.h 00:06:27.083 TEST_HEADER include/spdk/thread.h 00:06:27.083 TEST_HEADER include/spdk/trace.h 00:06:27.083 TEST_HEADER include/spdk/trace_parser.h 00:06:27.083 TEST_HEADER include/spdk/tree.h 00:06:27.083 TEST_HEADER include/spdk/ublk.h 00:06:27.083 TEST_HEADER include/spdk/util.h 00:06:27.083 TEST_HEADER include/spdk/uuid.h 00:06:27.083 CC app/spdk_dd/spdk_dd.o 00:06:27.083 TEST_HEADER include/spdk/version.h 00:06:27.083 CC examples/thread/thread/thread_ex.o 00:06:27.083 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:27.083 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:27.083 TEST_HEADER include/spdk/vhost.h 00:06:27.083 TEST_HEADER include/spdk/vmd.h 00:06:27.083 TEST_HEADER include/spdk/xor.h 00:06:27.083 TEST_HEADER include/spdk/zipf.h 00:06:27.083 CXX test/cpp_headers/accel.o 00:06:27.083 
LINK spdk_nvme_discover 00:06:27.342 LINK bdev_svc 00:06:27.342 CXX test/cpp_headers/accel_module.o 00:06:27.342 CC examples/sock/hello_world/hello_sock.o 00:06:27.342 LINK thread 00:06:27.342 LINK test_dma 00:06:27.342 CC examples/vmd/lsvmd/lsvmd.o 00:06:27.600 CXX test/cpp_headers/assert.o 00:06:27.600 LINK spdk_dd 00:06:27.600 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:27.600 LINK hello_sock 00:06:27.600 LINK lsvmd 00:06:27.600 LINK spdk_nvme_identify 00:06:27.600 CXX test/cpp_headers/barrier.o 00:06:27.600 LINK spdk_nvme_perf 00:06:27.859 CC examples/idxd/perf/perf.o 00:06:27.859 LINK spdk_top 00:06:27.859 CC examples/vmd/led/led.o 00:06:27.859 CC test/app/histogram_perf/histogram_perf.o 00:06:27.859 CXX test/cpp_headers/base64.o 00:06:27.859 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:27.859 CC examples/accel/perf/accel_perf.o 00:06:28.118 CC app/vhost/vhost.o 00:06:28.118 LINK nvme_fuzz 00:06:28.118 LINK led 00:06:28.118 CC app/fio/nvme/fio_plugin.o 00:06:28.118 LINK histogram_perf 00:06:28.118 CXX test/cpp_headers/bdev.o 00:06:28.118 LINK idxd_perf 00:06:28.118 CC app/fio/bdev/fio_plugin.o 00:06:28.118 LINK hello_fsdev 00:06:28.118 CXX test/cpp_headers/bdev_module.o 00:06:28.118 CXX test/cpp_headers/bdev_zone.o 00:06:28.118 LINK vhost 00:06:28.118 CXX test/cpp_headers/bit_array.o 00:06:28.118 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:28.376 CXX test/cpp_headers/bit_pool.o 00:06:28.376 CXX test/cpp_headers/blob_bdev.o 00:06:28.376 CXX test/cpp_headers/blobfs_bdev.o 00:06:28.376 CXX test/cpp_headers/blobfs.o 00:06:28.376 LINK accel_perf 00:06:28.638 CC test/event/event_perf/event_perf.o 00:06:28.638 CXX test/cpp_headers/blob.o 00:06:28.638 LINK spdk_nvme 00:06:28.638 LINK spdk_bdev 00:06:28.638 CC test/event/reactor/reactor.o 00:06:28.638 CC test/env/mem_callbacks/mem_callbacks.o 00:06:28.638 CC test/event/reactor_perf/reactor_perf.o 00:06:28.638 LINK event_perf 00:06:28.899 CC test/nvme/aer/aer.o 00:06:28.899 LINK reactor 00:06:28.899 CXX test/cpp_headers/conf.o 00:06:28.899 CC test/nvme/reset/reset.o 00:06:28.899 CC examples/blob/hello_world/hello_blob.o 00:06:28.899 CXX test/cpp_headers/config.o 00:06:28.899 LINK reactor_perf 00:06:28.899 CC test/event/app_repeat/app_repeat.o 00:06:29.158 CXX test/cpp_headers/cpuset.o 00:06:29.158 CC test/event/scheduler/scheduler.o 00:06:29.158 LINK reset 00:06:29.158 LINK app_repeat 00:06:29.158 LINK aer 00:06:29.158 LINK hello_blob 00:06:29.158 CC examples/blob/cli/blobcli.o 00:06:29.158 CC test/env/vtophys/vtophys.o 00:06:29.158 CXX test/cpp_headers/crc16.o 00:06:29.158 CXX test/cpp_headers/crc32.o 00:06:29.430 LINK mem_callbacks 00:06:29.430 LINK vtophys 00:06:29.430 CC test/nvme/sgl/sgl.o 00:06:29.430 LINK scheduler 00:06:29.430 CC test/nvme/e2edp/nvme_dp.o 00:06:29.430 CC test/nvme/overhead/overhead.o 00:06:29.430 CXX test/cpp_headers/crc64.o 00:06:29.430 CC test/nvme/err_injection/err_injection.o 00:06:29.430 CC test/nvme/startup/startup.o 00:06:29.716 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:29.716 CXX test/cpp_headers/dif.o 00:06:29.716 LINK blobcli 00:06:29.716 LINK sgl 00:06:29.716 LINK startup 00:06:29.716 LINK nvme_dp 00:06:29.716 LINK err_injection 00:06:29.716 LINK overhead 00:06:29.716 CC test/app/jsoncat/jsoncat.o 00:06:29.716 CXX test/cpp_headers/dma.o 00:06:29.716 LINK env_dpdk_post_init 00:06:29.716 CXX test/cpp_headers/endian.o 00:06:29.975 LINK jsoncat 00:06:29.975 LINK iscsi_fuzz 00:06:29.975 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:29.975 CC test/nvme/reserve/reserve.o 00:06:29.975 CC 
test/nvme/simple_copy/simple_copy.o 00:06:29.975 CC test/nvme/connect_stress/connect_stress.o 00:06:29.975 CXX test/cpp_headers/env_dpdk.o 00:06:29.975 CC test/env/memory/memory_ut.o 00:06:29.975 CC test/env/pci/pci_ut.o 00:06:29.975 CC examples/nvme/hello_world/hello_world.o 00:06:29.975 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:30.233 CXX test/cpp_headers/env.o 00:06:30.233 LINK connect_stress 00:06:30.234 CC test/nvme/boot_partition/boot_partition.o 00:06:30.234 LINK reserve 00:06:30.234 LINK simple_copy 00:06:30.234 CC test/nvme/compliance/nvme_compliance.o 00:06:30.234 LINK hello_world 00:06:30.234 CXX test/cpp_headers/event.o 00:06:30.492 LINK boot_partition 00:06:30.492 LINK pci_ut 00:06:30.492 CC test/nvme/fused_ordering/fused_ordering.o 00:06:30.492 LINK vhost_fuzz 00:06:30.492 CC examples/nvme/reconnect/reconnect.o 00:06:30.492 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:30.492 CC examples/nvme/arbitration/arbitration.o 00:06:30.492 CXX test/cpp_headers/fd_group.o 00:06:30.492 LINK nvme_compliance 00:06:30.492 CC examples/nvme/hotplug/hotplug.o 00:06:30.751 LINK fused_ordering 00:06:30.751 CC test/app/stub/stub.o 00:06:30.751 CXX test/cpp_headers/fd.o 00:06:30.751 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:30.751 CC examples/nvme/abort/abort.o 00:06:30.751 LINK reconnect 00:06:30.752 LINK hotplug 00:06:30.752 LINK arbitration 00:06:30.752 LINK stub 00:06:30.752 CXX test/cpp_headers/file.o 00:06:31.010 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:31.011 LINK cmb_copy 00:06:31.011 LINK nvme_manage 00:06:31.011 CXX test/cpp_headers/fsdev.o 00:06:31.011 CC test/nvme/fdp/fdp.o 00:06:31.011 CC test/nvme/cuse/cuse.o 00:06:31.011 CXX test/cpp_headers/fsdev_module.o 00:06:31.011 LINK doorbell_aers 00:06:31.011 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:31.269 CXX test/cpp_headers/ftl.o 00:06:31.269 LINK memory_ut 00:06:31.269 LINK abort 00:06:31.269 CXX test/cpp_headers/fuse_dispatcher.o 00:06:31.269 CC examples/bdev/hello_world/hello_bdev.o 00:06:31.269 LINK pmr_persistence 00:06:31.269 CXX test/cpp_headers/gpt_spec.o 00:06:31.269 CXX test/cpp_headers/hexlify.o 00:06:31.269 CC test/rpc_client/rpc_client_test.o 00:06:31.528 LINK fdp 00:06:31.528 CXX test/cpp_headers/histogram_data.o 00:06:31.528 CC examples/bdev/bdevperf/bdevperf.o 00:06:31.528 CXX test/cpp_headers/idxd.o 00:06:31.528 LINK hello_bdev 00:06:31.528 LINK rpc_client_test 00:06:31.528 CC test/accel/dif/dif.o 00:06:31.528 CXX test/cpp_headers/idxd_spec.o 00:06:31.786 CXX test/cpp_headers/init.o 00:06:31.787 CC test/blobfs/mkfs/mkfs.o 00:06:31.787 CXX test/cpp_headers/ioat.o 00:06:31.787 CC test/lvol/esnap/esnap.o 00:06:31.787 CXX test/cpp_headers/ioat_spec.o 00:06:31.787 CXX test/cpp_headers/iscsi_spec.o 00:06:31.787 CXX test/cpp_headers/json.o 00:06:31.787 CXX test/cpp_headers/jsonrpc.o 00:06:31.787 CXX test/cpp_headers/keyring.o 00:06:32.046 CXX test/cpp_headers/keyring_module.o 00:06:32.046 CXX test/cpp_headers/likely.o 00:06:32.046 LINK mkfs 00:06:32.046 CXX test/cpp_headers/log.o 00:06:32.046 CXX test/cpp_headers/lvol.o 00:06:32.046 CXX test/cpp_headers/md5.o 00:06:32.046 CXX test/cpp_headers/memory.o 00:06:32.046 CXX test/cpp_headers/mmio.o 00:06:32.046 CXX test/cpp_headers/nbd.o 00:06:32.046 CXX test/cpp_headers/net.o 00:06:32.046 CXX test/cpp_headers/notify.o 00:06:32.305 CXX test/cpp_headers/nvme.o 00:06:32.305 CXX test/cpp_headers/nvme_intel.o 00:06:32.305 CXX test/cpp_headers/nvme_ocssd.o 00:06:32.305 LINK dif 00:06:32.305 LINK bdevperf 00:06:32.305 CXX test/cpp_headers/nvme_ocssd_spec.o 
00:06:32.305 CXX test/cpp_headers/nvme_spec.o 00:06:32.305 CXX test/cpp_headers/nvme_zns.o 00:06:32.305 CXX test/cpp_headers/nvmf_cmd.o 00:06:32.305 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:32.563 LINK cuse 00:06:32.563 CXX test/cpp_headers/nvmf.o 00:06:32.563 CXX test/cpp_headers/nvmf_spec.o 00:06:32.563 CXX test/cpp_headers/nvmf_transport.o 00:06:32.563 CXX test/cpp_headers/opal.o 00:06:32.563 CXX test/cpp_headers/opal_spec.o 00:06:32.563 CXX test/cpp_headers/pci_ids.o 00:06:32.563 CXX test/cpp_headers/pipe.o 00:06:32.563 CXX test/cpp_headers/queue.o 00:06:32.823 CC test/bdev/bdevio/bdevio.o 00:06:32.823 CXX test/cpp_headers/reduce.o 00:06:32.823 CXX test/cpp_headers/rpc.o 00:06:32.823 CC examples/nvmf/nvmf/nvmf.o 00:06:32.823 CXX test/cpp_headers/scheduler.o 00:06:32.823 CXX test/cpp_headers/scsi.o 00:06:32.823 CXX test/cpp_headers/scsi_spec.o 00:06:32.823 CXX test/cpp_headers/sock.o 00:06:32.823 CXX test/cpp_headers/stdinc.o 00:06:32.823 CXX test/cpp_headers/string.o 00:06:32.823 CXX test/cpp_headers/thread.o 00:06:32.823 CXX test/cpp_headers/trace.o 00:06:32.823 CXX test/cpp_headers/trace_parser.o 00:06:33.083 CXX test/cpp_headers/tree.o 00:06:33.083 CXX test/cpp_headers/ublk.o 00:06:33.083 CXX test/cpp_headers/util.o 00:06:33.083 LINK nvmf 00:06:33.083 CXX test/cpp_headers/uuid.o 00:06:33.083 CXX test/cpp_headers/version.o 00:06:33.083 CXX test/cpp_headers/vfio_user_pci.o 00:06:33.083 CXX test/cpp_headers/vfio_user_spec.o 00:06:33.083 CXX test/cpp_headers/vhost.o 00:06:33.083 LINK bdevio 00:06:33.083 CXX test/cpp_headers/vmd.o 00:06:33.083 CXX test/cpp_headers/xor.o 00:06:33.341 CXX test/cpp_headers/zipf.o 00:06:36.628 LINK esnap 00:06:36.887 00:06:36.887 real 1m33.075s 00:06:36.887 user 8m17.787s 00:06:36.887 sys 1m39.856s 00:06:36.887 ************************************ 00:06:36.887 END TEST make 00:06:36.887 ************************************ 00:06:36.887 17:07:37 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:36.887 17:07:37 make -- common/autotest_common.sh@10 -- $ set +x 00:06:36.887 17:07:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:36.887 17:07:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:36.887 17:07:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:36.887 17:07:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:36.887 17:07:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:36.887 17:07:37 -- pm/common@44 -- $ pid=5257 00:06:36.887 17:07:37 -- pm/common@50 -- $ kill -TERM 5257 00:06:36.887 17:07:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:36.887 17:07:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:36.887 17:07:37 -- pm/common@44 -- $ pid=5259 00:06:36.887 17:07:37 -- pm/common@50 -- $ kill -TERM 5259 00:06:36.887 17:07:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:36.887 17:07:37 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:36.887 17:07:37 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.887 17:07:37 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.887 17:07:37 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:36.887 17:07:37 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:36.887 17:07:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.887 17:07:37 -- 
scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.887 17:07:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.887 17:07:37 -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.887 17:07:37 -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.887 17:07:37 -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.887 17:07:37 -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.887 17:07:37 -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.887 17:07:37 -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.887 17:07:37 -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.887 17:07:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.887 17:07:37 -- scripts/common.sh@344 -- # case "$op" in 00:06:36.887 17:07:37 -- scripts/common.sh@345 -- # : 1 00:06:36.887 17:07:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.887 17:07:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.887 17:07:37 -- scripts/common.sh@365 -- # decimal 1 00:06:37.146 17:07:37 -- scripts/common.sh@353 -- # local d=1 00:06:37.146 17:07:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.146 17:07:37 -- scripts/common.sh@355 -- # echo 1 00:06:37.146 17:07:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.146 17:07:37 -- scripts/common.sh@366 -- # decimal 2 00:06:37.146 17:07:37 -- scripts/common.sh@353 -- # local d=2 00:06:37.146 17:07:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.146 17:07:37 -- scripts/common.sh@355 -- # echo 2 00:06:37.146 17:07:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.146 17:07:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.146 17:07:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.146 17:07:37 -- scripts/common.sh@368 -- # return 0 00:06:37.146 17:07:37 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.146 17:07:37 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:37.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.146 --rc genhtml_branch_coverage=1 00:06:37.146 --rc genhtml_function_coverage=1 00:06:37.146 --rc genhtml_legend=1 00:06:37.146 --rc geninfo_all_blocks=1 00:06:37.146 --rc geninfo_unexecuted_blocks=1 00:06:37.146 00:06:37.146 ' 00:06:37.146 17:07:37 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:37.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.146 --rc genhtml_branch_coverage=1 00:06:37.146 --rc genhtml_function_coverage=1 00:06:37.146 --rc genhtml_legend=1 00:06:37.146 --rc geninfo_all_blocks=1 00:06:37.146 --rc geninfo_unexecuted_blocks=1 00:06:37.146 00:06:37.146 ' 00:06:37.146 17:07:37 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:37.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.146 --rc genhtml_branch_coverage=1 00:06:37.146 --rc genhtml_function_coverage=1 00:06:37.146 --rc genhtml_legend=1 00:06:37.146 --rc geninfo_all_blocks=1 00:06:37.146 --rc geninfo_unexecuted_blocks=1 00:06:37.146 00:06:37.146 ' 00:06:37.146 17:07:37 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:37.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.146 --rc genhtml_branch_coverage=1 00:06:37.146 --rc genhtml_function_coverage=1 00:06:37.146 --rc genhtml_legend=1 00:06:37.146 --rc geninfo_all_blocks=1 00:06:37.147 --rc geninfo_unexecuted_blocks=1 00:06:37.147 00:06:37.147 ' 00:06:37.147 17:07:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:37.147 
17:07:37 -- nvmf/common.sh@7 -- # uname -s 00:06:37.147 17:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.147 17:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.147 17:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.147 17:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.147 17:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.147 17:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.147 17:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.147 17:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.147 17:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.147 17:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.147 17:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:06:37.147 17:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:06:37.147 17:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.147 17:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.147 17:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:37.147 17:07:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.147 17:07:37 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:37.147 17:07:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.147 17:07:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.147 17:07:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.147 17:07:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.147 17:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.147 17:07:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.147 17:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.147 17:07:37 -- paths/export.sh@5 -- # export PATH 00:06:37.147 17:07:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.147 17:07:37 -- nvmf/common.sh@51 -- # : 0 00:06:37.147 17:07:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.147 17:07:37 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.147 17:07:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.147 17:07:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.147 17:07:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.147 17:07:37 -- nvmf/common.sh@33 -- # '[' 
'' -eq 1 ']' 00:06:37.147 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.147 17:07:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.147 17:07:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.147 17:07:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.147 17:07:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:37.147 17:07:37 -- spdk/autotest.sh@32 -- # uname -s 00:06:37.147 17:07:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:37.147 17:07:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:37.147 17:07:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:37.147 17:07:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:37.147 17:07:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:37.147 17:07:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:37.147 17:07:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:37.147 17:07:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:37.147 17:07:37 -- spdk/autotest.sh@48 -- # udevadm_pid=54368 00:06:37.147 17:07:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:37.147 17:07:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:37.147 17:07:37 -- pm/common@17 -- # local monitor 00:06:37.147 17:07:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:37.147 17:07:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:37.147 17:07:37 -- pm/common@25 -- # sleep 1 00:06:37.147 17:07:37 -- pm/common@21 -- # date +%s 00:06:37.147 17:07:37 -- pm/common@21 -- # date +%s 00:06:37.147 17:07:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730740057 00:06:37.147 17:07:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730740057 00:06:37.147 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730740057_collect-vmstat.pm.log 00:06:37.147 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730740057_collect-cpu-load.pm.log 00:06:38.083 17:07:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:38.083 17:07:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:38.083 17:07:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.083 17:07:38 -- common/autotest_common.sh@10 -- # set +x 00:06:38.083 17:07:38 -- spdk/autotest.sh@59 -- # create_test_list 00:06:38.083 17:07:38 -- common/autotest_common.sh@750 -- # xtrace_disable 00:06:38.083 17:07:38 -- common/autotest_common.sh@10 -- # set +x 00:06:38.083 17:07:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:38.083 17:07:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:38.083 17:07:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:38.083 17:07:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:38.083 17:07:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:38.083 17:07:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:38.083 17:07:38 -- common/autotest_common.sh@1455 -- # uname 00:06:38.083 17:07:38 
-- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:38.083 17:07:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:38.083 17:07:38 -- common/autotest_common.sh@1475 -- # uname 00:06:38.083 17:07:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:38.083 17:07:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:38.083 17:07:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:38.342 lcov: LCOV version 1.15 00:06:38.342 17:07:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:56.500 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:56.500 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:11.460 17:08:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:11.460 17:08:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.460 17:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:11.460 17:08:09 -- spdk/autotest.sh@78 -- # rm -f 00:07:11.460 17:08:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:11.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:11.460 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:11.460 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:11.460 17:08:10 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:11.460 17:08:10 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:11.460 17:08:10 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:11.460 17:08:10 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:11.460 17:08:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:11.460 17:08:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:11.460 17:08:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:11.460 17:08:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:11.460 17:08:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:11.460 17:08:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:11.460 17:08:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:11.460 17:08:10 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:11.460 17:08:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:11.460 17:08:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:11.460 17:08:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:11.460 17:08:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:07:11.460 17:08:10 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:07:11.460 17:08:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:11.460 17:08:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:11.460 17:08:10 -- common/autotest_common.sh@1658 -- # for 
nvme in /sys/block/nvme* 00:07:11.460 17:08:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:07:11.460 17:08:10 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:07:11.460 17:08:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:11.460 17:08:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:11.460 17:08:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:11.460 17:08:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:11.460 17:08:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:11.460 17:08:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:11.460 17:08:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:11.460 17:08:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:11.460 No valid GPT data, bailing 00:07:11.460 17:08:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:11.460 17:08:10 -- scripts/common.sh@394 -- # pt= 00:07:11.460 17:08:10 -- scripts/common.sh@395 -- # return 1 00:07:11.460 17:08:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:11.460 1+0 records in 00:07:11.460 1+0 records out 00:07:11.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512157 s, 205 MB/s 00:07:11.460 17:08:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:11.460 17:08:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:11.460 17:08:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:11.460 17:08:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:11.460 17:08:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:11.460 No valid GPT data, bailing 00:07:11.460 17:08:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:11.460 17:08:10 -- scripts/common.sh@394 -- # pt= 00:07:11.460 17:08:10 -- scripts/common.sh@395 -- # return 1 00:07:11.460 17:08:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:11.460 1+0 records in 00:07:11.460 1+0 records out 00:07:11.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505681 s, 207 MB/s 00:07:11.460 17:08:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:11.460 17:08:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:11.460 17:08:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:11.460 17:08:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:11.460 17:08:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:07:11.460 No valid GPT data, bailing 00:07:11.460 17:08:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:11.460 17:08:10 -- scripts/common.sh@394 -- # pt= 00:07:11.460 17:08:10 -- scripts/common.sh@395 -- # return 1 00:07:11.460 17:08:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:11.460 1+0 records in 00:07:11.460 1+0 records out 00:07:11.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523444 s, 200 MB/s 00:07:11.460 17:08:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:11.460 17:08:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:11.460 17:08:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:11.460 17:08:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:11.460 17:08:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:11.460 No valid GPT data, bailing 
00:07:11.460 17:08:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:11.460 17:08:11 -- scripts/common.sh@394 -- # pt= 00:07:11.460 17:08:11 -- scripts/common.sh@395 -- # return 1 00:07:11.460 17:08:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:11.460 1+0 records in 00:07:11.460 1+0 records out 00:07:11.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00374961 s, 280 MB/s 00:07:11.460 17:08:11 -- spdk/autotest.sh@105 -- # sync 00:07:11.460 17:08:11 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:11.460 17:08:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:11.460 17:08:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:12.398 17:08:13 -- spdk/autotest.sh@111 -- # uname -s 00:07:12.398 17:08:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:12.398 17:08:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:12.398 17:08:13 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:12.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.966 Hugepages 00:07:12.966 node hugesize free / total 00:07:12.966 node0 1048576kB 0 / 0 00:07:12.966 node0 2048kB 0 / 0 00:07:12.966 00:07:12.966 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:13.225 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:13.225 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:13.225 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:13.225 17:08:13 -- spdk/autotest.sh@117 -- # uname -s 00:07:13.225 17:08:13 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:13.225 17:08:13 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:13.225 17:08:13 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:14.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:14.162 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:14.162 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:14.162 17:08:14 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:15.100 17:08:15 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:15.100 17:08:15 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:15.100 17:08:15 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:15.100 17:08:15 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:15.100 17:08:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:15.100 17:08:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:15.100 17:08:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:15.100 17:08:15 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:15.100 17:08:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:15.100 17:08:15 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:07:15.100 17:08:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:15.100 17:08:15 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:15.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:15.668 Waiting for block devices as requested 00:07:15.668 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:07:15.668 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:15.668 17:08:16 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:15.668 17:08:16 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:15.668 17:08:16 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:15.668 17:08:16 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:07:15.668 17:08:16 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:15.668 17:08:16 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:15.668 17:08:16 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:15.668 17:08:16 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:07:15.668 17:08:16 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:07:15.668 17:08:16 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:07:15.668 17:08:16 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:07:15.668 17:08:16 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:15.668 17:08:16 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:15.927 17:08:16 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:07:15.927 17:08:16 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:15.927 17:08:16 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:15.927 17:08:16 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:07:15.927 17:08:16 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:15.927 17:08:16 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:15.927 17:08:16 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:15.927 17:08:16 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:15.927 17:08:16 -- common/autotest_common.sh@1541 -- # continue 00:07:15.927 17:08:16 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:15.927 17:08:16 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:15.927 17:08:16 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:15.927 17:08:16 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:07:15.927 17:08:16 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:15.927 17:08:16 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:15.927 17:08:16 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:15.927 17:08:16 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:15.927 17:08:16 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:15.927 17:08:16 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:15.927 17:08:16 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:15.927 17:08:16 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:15.927 17:08:16 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:15.927 17:08:16 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:07:15.927 17:08:16 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:15.927 17:08:16 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:15.927 17:08:16 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:15.927 17:08:16 -- 
common/autotest_common.sh@1538 -- # grep unvmcap 00:07:15.927 17:08:16 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:15.927 17:08:16 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:15.927 17:08:16 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:15.927 17:08:16 -- common/autotest_common.sh@1541 -- # continue 00:07:15.928 17:08:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:15.928 17:08:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.928 17:08:16 -- common/autotest_common.sh@10 -- # set +x 00:07:15.928 17:08:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:15.928 17:08:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.928 17:08:16 -- common/autotest_common.sh@10 -- # set +x 00:07:15.928 17:08:16 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:16.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:16.755 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.755 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.755 17:08:17 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:16.755 17:08:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.755 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:16.755 17:08:17 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:16.755 17:08:17 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:16.755 17:08:17 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:16.755 17:08:17 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:16.755 17:08:17 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:16.755 17:08:17 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:16.755 17:08:17 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:16.755 17:08:17 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:16.755 17:08:17 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:16.755 17:08:17 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:16.755 17:08:17 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:16.755 17:08:17 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:16.755 17:08:17 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:16.755 17:08:17 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:07:16.755 17:08:17 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:16.755 17:08:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:16.755 17:08:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:16.755 17:08:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:07:16.755 17:08:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:16.755 17:08:17 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:16.755 17:08:17 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:16.755 17:08:17 -- common/autotest_common.sh@1564 -- # device=0x0010 00:07:16.755 17:08:17 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:16.755 17:08:17 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:07:16.755 17:08:17 -- common/autotest_common.sh@1570 -- # return 0 00:07:16.755 17:08:17 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:07:16.755 17:08:17 
-- common/autotest_common.sh@1578 -- # return 0 00:07:16.755 17:08:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:16.755 17:08:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:16.755 17:08:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:16.755 17:08:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:16.755 17:08:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:16.755 17:08:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:16.755 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:17.014 17:08:17 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:07:17.014 17:08:17 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:07:17.014 17:08:17 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:07:17.014 17:08:17 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:17.014 17:08:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.014 17:08:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.014 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:17.014 ************************************ 00:07:17.014 START TEST env 00:07:17.014 ************************************ 00:07:17.014 17:08:17 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:17.014 * Looking for test storage... 00:07:17.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:17.014 17:08:17 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:17.014 17:08:17 env -- common/autotest_common.sh@1691 -- # lcov --version 00:07:17.014 17:08:17 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:17.014 17:08:17 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:17.014 17:08:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.014 17:08:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.014 17:08:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.014 17:08:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.014 17:08:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.014 17:08:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.014 17:08:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.014 17:08:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.014 17:08:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.014 17:08:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.014 17:08:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.014 17:08:17 env -- scripts/common.sh@344 -- # case "$op" in 00:07:17.014 17:08:17 env -- scripts/common.sh@345 -- # : 1 00:07:17.014 17:08:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.014 17:08:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.014 17:08:17 env -- scripts/common.sh@365 -- # decimal 1 00:07:17.014 17:08:17 env -- scripts/common.sh@353 -- # local d=1 00:07:17.014 17:08:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.014 17:08:17 env -- scripts/common.sh@355 -- # echo 1 00:07:17.014 17:08:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.014 17:08:17 env -- scripts/common.sh@366 -- # decimal 2 00:07:17.014 17:08:17 env -- scripts/common.sh@353 -- # local d=2 00:07:17.014 17:08:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.014 17:08:17 env -- scripts/common.sh@355 -- # echo 2 00:07:17.014 17:08:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.014 17:08:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.014 17:08:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.014 17:08:17 env -- scripts/common.sh@368 -- # return 0 00:07:17.014 17:08:17 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.014 17:08:17 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:17.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.014 --rc genhtml_branch_coverage=1 00:07:17.014 --rc genhtml_function_coverage=1 00:07:17.014 --rc genhtml_legend=1 00:07:17.014 --rc geninfo_all_blocks=1 00:07:17.014 --rc geninfo_unexecuted_blocks=1 00:07:17.014 00:07:17.014 ' 00:07:17.014 17:08:17 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:17.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.014 --rc genhtml_branch_coverage=1 00:07:17.014 --rc genhtml_function_coverage=1 00:07:17.014 --rc genhtml_legend=1 00:07:17.014 --rc geninfo_all_blocks=1 00:07:17.014 --rc geninfo_unexecuted_blocks=1 00:07:17.014 00:07:17.014 ' 00:07:17.015 17:08:17 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:17.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.015 --rc genhtml_branch_coverage=1 00:07:17.015 --rc genhtml_function_coverage=1 00:07:17.015 --rc genhtml_legend=1 00:07:17.015 --rc geninfo_all_blocks=1 00:07:17.015 --rc geninfo_unexecuted_blocks=1 00:07:17.015 00:07:17.015 ' 00:07:17.015 17:08:17 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:17.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.015 --rc genhtml_branch_coverage=1 00:07:17.015 --rc genhtml_function_coverage=1 00:07:17.015 --rc genhtml_legend=1 00:07:17.015 --rc geninfo_all_blocks=1 00:07:17.015 --rc geninfo_unexecuted_blocks=1 00:07:17.015 00:07:17.015 ' 00:07:17.015 17:08:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:17.015 17:08:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.015 17:08:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.015 17:08:17 env -- common/autotest_common.sh@10 -- # set +x 00:07:17.015 ************************************ 00:07:17.015 START TEST env_memory 00:07:17.015 ************************************ 00:07:17.015 17:08:17 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:17.015 00:07:17.015 00:07:17.015 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.015 http://cunit.sourceforge.net/ 00:07:17.015 00:07:17.015 00:07:17.015 Suite: memory 00:07:17.276 Test: alloc and free memory map ...[2024-11-04 17:08:17.840744] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:17.276 passed 00:07:17.276 Test: mem map translation ...[2024-11-04 17:08:17.872093] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:17.276 [2024-11-04 17:08:17.872160] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:17.276 [2024-11-04 17:08:17.872241] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:17.276 [2024-11-04 17:08:17.872254] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:17.276 passed 00:07:17.276 Test: mem map registration ...[2024-11-04 17:08:17.936241] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:17.276 [2024-11-04 17:08:17.936330] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:17.276 passed 00:07:17.276 Test: mem map adjacent registrations ...passed 00:07:17.276 00:07:17.276 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.276 suites 1 1 n/a 0 0 00:07:17.276 tests 4 4 4 0 0 00:07:17.276 asserts 152 152 152 0 n/a 00:07:17.276 00:07:17.276 Elapsed time = 0.221 seconds 00:07:17.276 00:07:17.276 real 0m0.238s 00:07:17.276 user 0m0.220s 00:07:17.276 sys 0m0.014s 00:07:17.276 17:08:18 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.276 17:08:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:17.276 ************************************ 00:07:17.276 END TEST env_memory 00:07:17.276 ************************************ 00:07:17.276 17:08:18 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:17.276 17:08:18 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.276 17:08:18 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.276 17:08:18 env -- common/autotest_common.sh@10 -- # set +x 00:07:17.540 ************************************ 00:07:17.540 START TEST env_vtophys 00:07:17.540 ************************************ 00:07:17.540 17:08:18 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:17.540 EAL: lib.eal log level changed from notice to debug 00:07:17.540 EAL: Detected lcore 0 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 1 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 2 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 3 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 4 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 5 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 6 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 7 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 8 as core 0 on socket 0 00:07:17.540 EAL: Detected lcore 9 as core 0 on socket 0 00:07:17.540 EAL: Maximum logical cores by configuration: 128 00:07:17.540 EAL: Detected CPU lcores: 10 00:07:17.540 EAL: Detected NUMA nodes: 1 00:07:17.540 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:17.540 EAL: Detected shared linkage of DPDK 00:07:17.540 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:17.540 EAL: Selected IOVA mode 'PA' 00:07:17.540 EAL: Probing VFIO support... 00:07:17.540 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:17.540 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:17.540 EAL: Ask a virtual area of 0x2e000 bytes 00:07:17.540 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:17.540 EAL: Setting up physically contiguous memory... 00:07:17.540 EAL: Setting maximum number of open files to 524288 00:07:17.540 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:17.540 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:17.540 EAL: Ask a virtual area of 0x61000 bytes 00:07:17.540 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:17.540 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:17.540 EAL: Ask a virtual area of 0x400000000 bytes 00:07:17.540 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:17.540 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:17.540 EAL: Ask a virtual area of 0x61000 bytes 00:07:17.540 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:17.540 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:17.540 EAL: Ask a virtual area of 0x400000000 bytes 00:07:17.540 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:17.540 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:17.540 EAL: Ask a virtual area of 0x61000 bytes 00:07:17.540 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:17.540 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:17.540 EAL: Ask a virtual area of 0x400000000 bytes 00:07:17.540 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:17.540 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:17.540 EAL: Ask a virtual area of 0x61000 bytes 00:07:17.540 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:17.540 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:17.540 EAL: Ask a virtual area of 0x400000000 bytes 00:07:17.540 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:17.540 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:17.540 EAL: Hugepages will be freed exactly as allocated. 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: TSC frequency is ~2200000 KHz 00:07:17.540 EAL: Main lcore 0 is ready (tid=7f2bfd024a00;cpuset=[0]) 00:07:17.540 EAL: Trying to obtain current memory policy. 00:07:17.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.540 EAL: Restoring previous memory policy: 0 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was expanded by 2MB 00:07:17.540 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:17.540 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:17.540 EAL: Mem event callback 'spdk:(nil)' registered 00:07:17.540 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:17.540 00:07:17.540 00:07:17.540 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.540 http://cunit.sourceforge.net/ 00:07:17.540 00:07:17.540 00:07:17.540 Suite: components_suite 00:07:17.540 Test: vtophys_malloc_test ...passed 00:07:17.540 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:17.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.540 EAL: Restoring previous memory policy: 4 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was expanded by 4MB 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was shrunk by 4MB 00:07:17.540 EAL: Trying to obtain current memory policy. 00:07:17.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.540 EAL: Restoring previous memory policy: 4 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was expanded by 6MB 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was shrunk by 6MB 00:07:17.540 EAL: Trying to obtain current memory policy. 00:07:17.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.540 EAL: Restoring previous memory policy: 4 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was expanded by 10MB 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was shrunk by 10MB 00:07:17.540 EAL: Trying to obtain current memory policy. 00:07:17.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.540 EAL: Restoring previous memory policy: 4 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was expanded by 18MB 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was shrunk by 18MB 00:07:17.540 EAL: Trying to obtain current memory policy. 00:07:17.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.540 EAL: Restoring previous memory policy: 4 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was expanded by 34MB 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was shrunk by 34MB 00:07:17.540 EAL: Trying to obtain current memory policy. 
00:07:17.540 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.540 EAL: Restoring previous memory policy: 4 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was expanded by 66MB 00:07:17.540 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.540 EAL: request: mp_malloc_sync 00:07:17.540 EAL: No shared files mode enabled, IPC is disabled 00:07:17.540 EAL: Heap on socket 0 was shrunk by 66MB 00:07:17.540 EAL: Trying to obtain current memory policy. 00:07:17.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.541 EAL: Restoring previous memory policy: 4 00:07:17.541 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.541 EAL: request: mp_malloc_sync 00:07:17.541 EAL: No shared files mode enabled, IPC is disabled 00:07:17.541 EAL: Heap on socket 0 was expanded by 130MB 00:07:17.800 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.800 EAL: request: mp_malloc_sync 00:07:17.800 EAL: No shared files mode enabled, IPC is disabled 00:07:17.800 EAL: Heap on socket 0 was shrunk by 130MB 00:07:17.800 EAL: Trying to obtain current memory policy. 00:07:17.800 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.800 EAL: Restoring previous memory policy: 4 00:07:17.800 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.800 EAL: request: mp_malloc_sync 00:07:17.800 EAL: No shared files mode enabled, IPC is disabled 00:07:17.800 EAL: Heap on socket 0 was expanded by 258MB 00:07:17.800 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.800 EAL: request: mp_malloc_sync 00:07:17.800 EAL: No shared files mode enabled, IPC is disabled 00:07:17.800 EAL: Heap on socket 0 was shrunk by 258MB 00:07:17.800 EAL: Trying to obtain current memory policy. 00:07:17.800 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:18.059 EAL: Restoring previous memory policy: 4 00:07:18.059 EAL: Calling mem event callback 'spdk:(nil)' 00:07:18.059 EAL: request: mp_malloc_sync 00:07:18.059 EAL: No shared files mode enabled, IPC is disabled 00:07:18.059 EAL: Heap on socket 0 was expanded by 514MB 00:07:18.059 EAL: Calling mem event callback 'spdk:(nil)' 00:07:18.319 EAL: request: mp_malloc_sync 00:07:18.319 EAL: No shared files mode enabled, IPC is disabled 00:07:18.319 EAL: Heap on socket 0 was shrunk by 514MB 00:07:18.319 EAL: Trying to obtain current memory policy. 
00:07:18.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:18.578 EAL: Restoring previous memory policy: 4 00:07:18.579 EAL: Calling mem event callback 'spdk:(nil)' 00:07:18.579 EAL: request: mp_malloc_sync 00:07:18.579 EAL: No shared files mode enabled, IPC is disabled 00:07:18.579 EAL: Heap on socket 0 was expanded by 1026MB 00:07:18.579 EAL: Calling mem event callback 'spdk:(nil)' 00:07:18.838 passed 00:07:18.838 00:07:18.838 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.838 suites 1 1 n/a 0 0 00:07:18.838 tests 2 2 2 0 0 00:07:18.838 asserts 5365 5365 5365 0 n/a 00:07:18.838 00:07:18.838 Elapsed time = 1.261 seconds 00:07:18.838 EAL: request: mp_malloc_sync 00:07:18.838 EAL: No shared files mode enabled, IPC is disabled 00:07:18.838 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:18.838 EAL: Calling mem event callback 'spdk:(nil)' 00:07:18.838 EAL: request: mp_malloc_sync 00:07:18.838 EAL: No shared files mode enabled, IPC is disabled 00:07:18.838 EAL: Heap on socket 0 was shrunk by 2MB 00:07:18.838 EAL: No shared files mode enabled, IPC is disabled 00:07:18.838 EAL: No shared files mode enabled, IPC is disabled 00:07:18.838 EAL: No shared files mode enabled, IPC is disabled 00:07:18.838 00:07:18.838 real 0m1.473s 00:07:18.838 user 0m0.814s 00:07:18.838 sys 0m0.527s 00:07:18.838 17:08:19 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.838 17:08:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:18.838 ************************************ 00:07:18.838 END TEST env_vtophys 00:07:18.838 ************************************ 00:07:18.838 17:08:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:18.838 17:08:19 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.838 17:08:19 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.838 17:08:19 env -- common/autotest_common.sh@10 -- # set +x 00:07:18.838 ************************************ 00:07:18.838 START TEST env_pci 00:07:18.838 ************************************ 00:07:18.838 17:08:19 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:18.838 00:07:18.838 00:07:18.838 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.838 http://cunit.sourceforge.net/ 00:07:18.838 00:07:18.838 00:07:18.838 Suite: pci 00:07:18.838 Test: pci_hook ...[2024-11-04 17:08:19.625839] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56608 has claimed it 00:07:18.838 passed 00:07:18.838 00:07:18.838 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.838 suites 1 1 n/a 0 0 00:07:18.838 tests 1 1 1 0 0 00:07:18.838 asserts 25 25 25 0 n/a 00:07:18.838 00:07:18.838 Elapsed time = 0.002 seconds 00:07:18.838 EAL: Cannot find device (10000:00:01.0) 00:07:18.838 EAL: Failed to attach device on primary process 00:07:18.838 00:07:18.838 real 0m0.021s 00:07:18.838 user 0m0.008s 00:07:18.838 sys 0m0.013s 00:07:18.838 17:08:19 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.838 17:08:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:18.838 ************************************ 00:07:18.838 END TEST env_pci 00:07:18.838 ************************************ 00:07:19.097 17:08:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:19.097 17:08:19 env -- env/env.sh@15 -- # uname 00:07:19.097 17:08:19 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:19.097 17:08:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:19.097 17:08:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:19.097 17:08:19 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:19.097 17:08:19 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.097 17:08:19 env -- common/autotest_common.sh@10 -- # set +x 00:07:19.097 ************************************ 00:07:19.097 START TEST env_dpdk_post_init 00:07:19.097 ************************************ 00:07:19.097 17:08:19 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:19.097 EAL: Detected CPU lcores: 10 00:07:19.097 EAL: Detected NUMA nodes: 1 00:07:19.097 EAL: Detected shared linkage of DPDK 00:07:19.097 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:19.097 EAL: Selected IOVA mode 'PA' 00:07:19.097 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:19.097 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:19.097 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:19.097 Starting DPDK initialization... 00:07:19.097 Starting SPDK post initialization... 00:07:19.097 SPDK NVMe probe 00:07:19.097 Attaching to 0000:00:10.0 00:07:19.097 Attaching to 0000:00:11.0 00:07:19.097 Attached to 0000:00:10.0 00:07:19.097 Attached to 0000:00:11.0 00:07:19.097 Cleaning up... 00:07:19.097 00:07:19.097 real 0m0.191s 00:07:19.097 user 0m0.055s 00:07:19.097 sys 0m0.035s 00:07:19.097 17:08:19 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.097 ************************************ 00:07:19.097 END TEST env_dpdk_post_init 00:07:19.097 ************************************ 00:07:19.097 17:08:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:19.356 17:08:19 env -- env/env.sh@26 -- # uname 00:07:19.356 17:08:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:19.356 17:08:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:19.356 17:08:19 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:19.356 17:08:19 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.356 17:08:19 env -- common/autotest_common.sh@10 -- # set +x 00:07:19.357 ************************************ 00:07:19.357 START TEST env_mem_callbacks 00:07:19.357 ************************************ 00:07:19.357 17:08:19 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:19.357 EAL: Detected CPU lcores: 10 00:07:19.357 EAL: Detected NUMA nodes: 1 00:07:19.357 EAL: Detected shared linkage of DPDK 00:07:19.357 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:19.357 EAL: Selected IOVA mode 'PA' 00:07:19.357 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:19.357 00:07:19.357 00:07:19.357 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.357 http://cunit.sourceforge.net/ 00:07:19.357 00:07:19.357 00:07:19.357 Suite: memory 00:07:19.357 Test: test ... 
00:07:19.357 register 0x200000200000 2097152 00:07:19.357 malloc 3145728 00:07:19.357 register 0x200000400000 4194304 00:07:19.357 buf 0x200000500000 len 3145728 PASSED 00:07:19.357 malloc 64 00:07:19.357 buf 0x2000004fff40 len 64 PASSED 00:07:19.357 malloc 4194304 00:07:19.357 register 0x200000800000 6291456 00:07:19.357 buf 0x200000a00000 len 4194304 PASSED 00:07:19.357 free 0x200000500000 3145728 00:07:19.357 free 0x2000004fff40 64 00:07:19.357 unregister 0x200000400000 4194304 PASSED 00:07:19.357 free 0x200000a00000 4194304 00:07:19.357 unregister 0x200000800000 6291456 PASSED 00:07:19.357 malloc 8388608 00:07:19.357 register 0x200000400000 10485760 00:07:19.357 buf 0x200000600000 len 8388608 PASSED 00:07:19.357 free 0x200000600000 8388608 00:07:19.357 unregister 0x200000400000 10485760 PASSED 00:07:19.357 passed 00:07:19.357 00:07:19.357 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.357 suites 1 1 n/a 0 0 00:07:19.357 tests 1 1 1 0 0 00:07:19.357 asserts 15 15 15 0 n/a 00:07:19.357 00:07:19.357 Elapsed time = 0.010 seconds 00:07:19.357 00:07:19.357 real 0m0.151s 00:07:19.357 user 0m0.017s 00:07:19.357 sys 0m0.030s 00:07:19.357 17:08:20 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.357 ************************************ 00:07:19.357 END TEST env_mem_callbacks 00:07:19.357 ************************************ 00:07:19.357 17:08:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:19.357 00:07:19.357 real 0m2.566s 00:07:19.357 user 0m1.316s 00:07:19.357 sys 0m0.886s 00:07:19.357 17:08:20 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:19.357 17:08:20 env -- common/autotest_common.sh@10 -- # set +x 00:07:19.357 ************************************ 00:07:19.357 END TEST env 00:07:19.357 ************************************ 00:07:19.616 17:08:20 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:19.616 17:08:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:19.616 17:08:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:19.616 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:07:19.616 ************************************ 00:07:19.616 START TEST rpc 00:07:19.616 ************************************ 00:07:19.616 17:08:20 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:19.616 * Looking for test storage... 
00:07:19.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:19.616 17:08:20 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:19.616 17:08:20 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:19.616 17:08:20 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:19.616 17:08:20 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:19.616 17:08:20 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.616 17:08:20 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.616 17:08:20 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.616 17:08:20 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.616 17:08:20 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.616 17:08:20 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.616 17:08:20 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.616 17:08:20 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.616 17:08:20 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.616 17:08:20 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.616 17:08:20 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.616 17:08:20 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:19.616 17:08:20 rpc -- scripts/common.sh@345 -- # : 1 00:07:19.616 17:08:20 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.616 17:08:20 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.616 17:08:20 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:19.616 17:08:20 rpc -- scripts/common.sh@353 -- # local d=1 00:07:19.616 17:08:20 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.616 17:08:20 rpc -- scripts/common.sh@355 -- # echo 1 00:07:19.616 17:08:20 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.616 17:08:20 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:19.616 17:08:20 rpc -- scripts/common.sh@353 -- # local d=2 00:07:19.616 17:08:20 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.616 17:08:20 rpc -- scripts/common.sh@355 -- # echo 2 00:07:19.617 17:08:20 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.617 17:08:20 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.617 17:08:20 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.617 17:08:20 rpc -- scripts/common.sh@368 -- # return 0 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:19.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.617 --rc genhtml_branch_coverage=1 00:07:19.617 --rc genhtml_function_coverage=1 00:07:19.617 --rc genhtml_legend=1 00:07:19.617 --rc geninfo_all_blocks=1 00:07:19.617 --rc geninfo_unexecuted_blocks=1 00:07:19.617 00:07:19.617 ' 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:19.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.617 --rc genhtml_branch_coverage=1 00:07:19.617 --rc genhtml_function_coverage=1 00:07:19.617 --rc genhtml_legend=1 00:07:19.617 --rc geninfo_all_blocks=1 00:07:19.617 --rc geninfo_unexecuted_blocks=1 00:07:19.617 00:07:19.617 ' 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:19.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.617 --rc genhtml_branch_coverage=1 00:07:19.617 --rc genhtml_function_coverage=1 00:07:19.617 --rc 
genhtml_legend=1 00:07:19.617 --rc geninfo_all_blocks=1 00:07:19.617 --rc geninfo_unexecuted_blocks=1 00:07:19.617 00:07:19.617 ' 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:19.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.617 --rc genhtml_branch_coverage=1 00:07:19.617 --rc genhtml_function_coverage=1 00:07:19.617 --rc genhtml_legend=1 00:07:19.617 --rc geninfo_all_blocks=1 00:07:19.617 --rc geninfo_unexecuted_blocks=1 00:07:19.617 00:07:19.617 ' 00:07:19.617 17:08:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56731 00:07:19.617 17:08:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:19.617 17:08:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56731 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@833 -- # '[' -z 56731 ']' 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:19.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:19.617 17:08:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.617 17:08:20 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:19.875 [2024-11-04 17:08:20.460479] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:07:19.876 [2024-11-04 17:08:20.460612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56731 ] 00:07:19.876 [2024-11-04 17:08:20.613687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.134 [2024-11-04 17:08:20.683924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:20.134 [2024-11-04 17:08:20.683997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56731' to capture a snapshot of events at runtime. 00:07:20.134 [2024-11-04 17:08:20.684023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.134 [2024-11-04 17:08:20.684034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.134 [2024-11-04 17:08:20.684042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56731 for offline analysis/debug. 
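The two notices above describe the standard ways to pull tracepoint data out of this spdk_tgt instance (pid 56731): run the spdk_trace app against the live process, or keep the shared-memory trace file for offline analysis. A minimal sketch follows; the -s/-p flags and the /dev/shm path are quoted from the notices themselves, while the build/bin location of the spdk_trace binary is an assumption based on where spdk_tgt lives in this workspace.

# live snapshot of events from the running target (command quoted from the notice above)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 56731

# or preserve the trace shared-memory file before the target exits, for offline analysis later
cp /dev/shm/spdk_tgt_trace.pid56731 /tmp/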
00:07:20.134 [2024-11-04 17:08:20.684574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.134 [2024-11-04 17:08:20.769401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.702 17:08:21 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:20.702 17:08:21 rpc -- common/autotest_common.sh@866 -- # return 0 00:07:20.702 17:08:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:20.702 17:08:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:20.702 17:08:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:20.702 17:08:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:20.702 17:08:21 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:20.702 17:08:21 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:20.961 17:08:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 ************************************ 00:07:20.961 START TEST rpc_integrity 00:07:20.961 ************************************ 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:20.961 { 00:07:20.961 "name": "Malloc0", 00:07:20.961 "aliases": [ 00:07:20.961 "e1dffef2-6bdd-47ee-a30d-74c013e3ea17" 00:07:20.961 ], 00:07:20.961 "product_name": "Malloc disk", 00:07:20.961 "block_size": 512, 00:07:20.961 "num_blocks": 16384, 00:07:20.961 "uuid": "e1dffef2-6bdd-47ee-a30d-74c013e3ea17", 00:07:20.961 "assigned_rate_limits": { 00:07:20.961 "rw_ios_per_sec": 0, 00:07:20.961 "rw_mbytes_per_sec": 0, 00:07:20.961 "r_mbytes_per_sec": 0, 00:07:20.961 "w_mbytes_per_sec": 0 00:07:20.961 }, 00:07:20.961 "claimed": false, 00:07:20.961 "zoned": false, 00:07:20.961 
"supported_io_types": { 00:07:20.961 "read": true, 00:07:20.961 "write": true, 00:07:20.961 "unmap": true, 00:07:20.961 "flush": true, 00:07:20.961 "reset": true, 00:07:20.961 "nvme_admin": false, 00:07:20.961 "nvme_io": false, 00:07:20.961 "nvme_io_md": false, 00:07:20.961 "write_zeroes": true, 00:07:20.961 "zcopy": true, 00:07:20.961 "get_zone_info": false, 00:07:20.961 "zone_management": false, 00:07:20.961 "zone_append": false, 00:07:20.961 "compare": false, 00:07:20.961 "compare_and_write": false, 00:07:20.961 "abort": true, 00:07:20.961 "seek_hole": false, 00:07:20.961 "seek_data": false, 00:07:20.961 "copy": true, 00:07:20.961 "nvme_iov_md": false 00:07:20.961 }, 00:07:20.961 "memory_domains": [ 00:07:20.961 { 00:07:20.961 "dma_device_id": "system", 00:07:20.961 "dma_device_type": 1 00:07:20.961 }, 00:07:20.961 { 00:07:20.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.961 "dma_device_type": 2 00:07:20.961 } 00:07:20.961 ], 00:07:20.961 "driver_specific": {} 00:07:20.961 } 00:07:20.961 ]' 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 [2024-11-04 17:08:21.672375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:20.961 [2024-11-04 17:08:21.672426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.961 [2024-11-04 17:08:21.672446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x652f10 00:07:20.961 [2024-11-04 17:08:21.672456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.961 [2024-11-04 17:08:21.674211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.961 [2024-11-04 17:08:21.674255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:20.961 Passthru0 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:20.961 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.961 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:20.961 { 00:07:20.961 "name": "Malloc0", 00:07:20.961 "aliases": [ 00:07:20.961 "e1dffef2-6bdd-47ee-a30d-74c013e3ea17" 00:07:20.961 ], 00:07:20.961 "product_name": "Malloc disk", 00:07:20.961 "block_size": 512, 00:07:20.961 "num_blocks": 16384, 00:07:20.961 "uuid": "e1dffef2-6bdd-47ee-a30d-74c013e3ea17", 00:07:20.961 "assigned_rate_limits": { 00:07:20.961 "rw_ios_per_sec": 0, 00:07:20.961 "rw_mbytes_per_sec": 0, 00:07:20.961 "r_mbytes_per_sec": 0, 00:07:20.961 "w_mbytes_per_sec": 0 00:07:20.961 }, 00:07:20.961 "claimed": true, 00:07:20.961 "claim_type": "exclusive_write", 00:07:20.961 "zoned": false, 00:07:20.961 "supported_io_types": { 00:07:20.961 "read": true, 00:07:20.961 "write": true, 00:07:20.961 "unmap": true, 00:07:20.961 "flush": true, 00:07:20.961 "reset": true, 00:07:20.961 "nvme_admin": false, 
00:07:20.961 "nvme_io": false, 00:07:20.961 "nvme_io_md": false, 00:07:20.961 "write_zeroes": true, 00:07:20.961 "zcopy": true, 00:07:20.961 "get_zone_info": false, 00:07:20.961 "zone_management": false, 00:07:20.961 "zone_append": false, 00:07:20.961 "compare": false, 00:07:20.961 "compare_and_write": false, 00:07:20.961 "abort": true, 00:07:20.961 "seek_hole": false, 00:07:20.962 "seek_data": false, 00:07:20.962 "copy": true, 00:07:20.962 "nvme_iov_md": false 00:07:20.962 }, 00:07:20.962 "memory_domains": [ 00:07:20.962 { 00:07:20.962 "dma_device_id": "system", 00:07:20.962 "dma_device_type": 1 00:07:20.962 }, 00:07:20.962 { 00:07:20.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.962 "dma_device_type": 2 00:07:20.962 } 00:07:20.962 ], 00:07:20.962 "driver_specific": {} 00:07:20.962 }, 00:07:20.962 { 00:07:20.962 "name": "Passthru0", 00:07:20.962 "aliases": [ 00:07:20.962 "f3533263-d2de-54a9-a403-a6504f58856a" 00:07:20.962 ], 00:07:20.962 "product_name": "passthru", 00:07:20.962 "block_size": 512, 00:07:20.962 "num_blocks": 16384, 00:07:20.962 "uuid": "f3533263-d2de-54a9-a403-a6504f58856a", 00:07:20.962 "assigned_rate_limits": { 00:07:20.962 "rw_ios_per_sec": 0, 00:07:20.962 "rw_mbytes_per_sec": 0, 00:07:20.962 "r_mbytes_per_sec": 0, 00:07:20.962 "w_mbytes_per_sec": 0 00:07:20.962 }, 00:07:20.962 "claimed": false, 00:07:20.962 "zoned": false, 00:07:20.962 "supported_io_types": { 00:07:20.962 "read": true, 00:07:20.962 "write": true, 00:07:20.962 "unmap": true, 00:07:20.962 "flush": true, 00:07:20.962 "reset": true, 00:07:20.962 "nvme_admin": false, 00:07:20.962 "nvme_io": false, 00:07:20.962 "nvme_io_md": false, 00:07:20.962 "write_zeroes": true, 00:07:20.962 "zcopy": true, 00:07:20.962 "get_zone_info": false, 00:07:20.962 "zone_management": false, 00:07:20.962 "zone_append": false, 00:07:20.962 "compare": false, 00:07:20.962 "compare_and_write": false, 00:07:20.962 "abort": true, 00:07:20.962 "seek_hole": false, 00:07:20.962 "seek_data": false, 00:07:20.962 "copy": true, 00:07:20.962 "nvme_iov_md": false 00:07:20.962 }, 00:07:20.962 "memory_domains": [ 00:07:20.962 { 00:07:20.962 "dma_device_id": "system", 00:07:20.962 "dma_device_type": 1 00:07:20.962 }, 00:07:20.962 { 00:07:20.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.962 "dma_device_type": 2 00:07:20.962 } 00:07:20.962 ], 00:07:20.962 "driver_specific": { 00:07:20.962 "passthru": { 00:07:20.962 "name": "Passthru0", 00:07:20.962 "base_bdev_name": "Malloc0" 00:07:20.962 } 00:07:20.962 } 00:07:20.962 } 00:07:20.962 ]' 00:07:20.962 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:20.962 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:20.962 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:20.962 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.962 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.221 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:21.221 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.221 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.221 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:21.221 17:08:21 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.221 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.221 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:21.221 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:21.221 17:08:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:21.221 00:07:21.221 real 0m0.331s 00:07:21.221 user 0m0.218s 00:07:21.221 sys 0m0.043s 00:07:21.221 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.221 ************************************ 00:07:21.221 END TEST rpc_integrity 00:07:21.221 17:08:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 ************************************ 00:07:21.221 17:08:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:21.221 17:08:21 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.221 17:08:21 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.221 17:08:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 ************************************ 00:07:21.221 START TEST rpc_plugins 00:07:21.221 ************************************ 00:07:21.221 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:07:21.221 17:08:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:21.221 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.221 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:21.221 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.221 17:08:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:21.221 17:08:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:21.222 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.222 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:21.222 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.222 17:08:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:21.222 { 00:07:21.222 "name": "Malloc1", 00:07:21.222 "aliases": [ 00:07:21.222 "a301270e-107a-494b-8ad3-d86d1d100e38" 00:07:21.222 ], 00:07:21.222 "product_name": "Malloc disk", 00:07:21.222 "block_size": 4096, 00:07:21.222 "num_blocks": 256, 00:07:21.222 "uuid": "a301270e-107a-494b-8ad3-d86d1d100e38", 00:07:21.222 "assigned_rate_limits": { 00:07:21.222 "rw_ios_per_sec": 0, 00:07:21.222 "rw_mbytes_per_sec": 0, 00:07:21.222 "r_mbytes_per_sec": 0, 00:07:21.222 "w_mbytes_per_sec": 0 00:07:21.222 }, 00:07:21.222 "claimed": false, 00:07:21.222 "zoned": false, 00:07:21.222 "supported_io_types": { 00:07:21.222 "read": true, 00:07:21.222 "write": true, 00:07:21.222 "unmap": true, 00:07:21.222 "flush": true, 00:07:21.222 "reset": true, 00:07:21.222 "nvme_admin": false, 00:07:21.222 "nvme_io": false, 00:07:21.222 "nvme_io_md": false, 00:07:21.222 "write_zeroes": true, 00:07:21.222 "zcopy": true, 00:07:21.222 "get_zone_info": false, 00:07:21.222 "zone_management": false, 00:07:21.222 "zone_append": false, 00:07:21.222 "compare": false, 00:07:21.222 "compare_and_write": false, 00:07:21.222 "abort": true, 00:07:21.222 "seek_hole": false, 00:07:21.222 "seek_data": false, 00:07:21.222 "copy": true, 00:07:21.222 "nvme_iov_md": false 00:07:21.222 }, 00:07:21.222 "memory_domains": [ 00:07:21.222 { 
00:07:21.222 "dma_device_id": "system", 00:07:21.222 "dma_device_type": 1 00:07:21.222 }, 00:07:21.222 { 00:07:21.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.222 "dma_device_type": 2 00:07:21.222 } 00:07:21.222 ], 00:07:21.222 "driver_specific": {} 00:07:21.222 } 00:07:21.222 ]' 00:07:21.222 17:08:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:21.222 17:08:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:21.222 17:08:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:21.222 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.222 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:21.222 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.222 17:08:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:21.222 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.222 17:08:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:21.222 17:08:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.222 17:08:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:21.222 17:08:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:21.481 17:08:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:21.481 00:07:21.481 real 0m0.183s 00:07:21.481 user 0m0.129s 00:07:21.481 sys 0m0.019s 00:07:21.481 17:08:22 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.481 ************************************ 00:07:21.481 END TEST rpc_plugins 00:07:21.481 17:08:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:21.481 ************************************ 00:07:21.481 17:08:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:21.481 17:08:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.481 17:08:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.481 17:08:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.481 ************************************ 00:07:21.481 START TEST rpc_trace_cmd_test 00:07:21.481 ************************************ 00:07:21.481 17:08:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:07:21.481 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:21.481 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:21.481 17:08:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.481 17:08:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.481 17:08:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.481 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:21.481 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56731", 00:07:21.481 "tpoint_group_mask": "0x8", 00:07:21.481 "iscsi_conn": { 00:07:21.481 "mask": "0x2", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "scsi": { 00:07:21.481 "mask": "0x4", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "bdev": { 00:07:21.481 "mask": "0x8", 00:07:21.481 "tpoint_mask": "0xffffffffffffffff" 00:07:21.481 }, 00:07:21.481 "nvmf_rdma": { 00:07:21.481 "mask": "0x10", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "nvmf_tcp": { 00:07:21.481 "mask": "0x20", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "ftl": { 00:07:21.481 
"mask": "0x40", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "blobfs": { 00:07:21.481 "mask": "0x80", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "dsa": { 00:07:21.481 "mask": "0x200", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "thread": { 00:07:21.481 "mask": "0x400", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "nvme_pcie": { 00:07:21.481 "mask": "0x800", 00:07:21.481 "tpoint_mask": "0x0" 00:07:21.481 }, 00:07:21.481 "iaa": { 00:07:21.482 "mask": "0x1000", 00:07:21.482 "tpoint_mask": "0x0" 00:07:21.482 }, 00:07:21.482 "nvme_tcp": { 00:07:21.482 "mask": "0x2000", 00:07:21.482 "tpoint_mask": "0x0" 00:07:21.482 }, 00:07:21.482 "bdev_nvme": { 00:07:21.482 "mask": "0x4000", 00:07:21.482 "tpoint_mask": "0x0" 00:07:21.482 }, 00:07:21.482 "sock": { 00:07:21.482 "mask": "0x8000", 00:07:21.482 "tpoint_mask": "0x0" 00:07:21.482 }, 00:07:21.482 "blob": { 00:07:21.482 "mask": "0x10000", 00:07:21.482 "tpoint_mask": "0x0" 00:07:21.482 }, 00:07:21.482 "bdev_raid": { 00:07:21.482 "mask": "0x20000", 00:07:21.482 "tpoint_mask": "0x0" 00:07:21.482 }, 00:07:21.482 "scheduler": { 00:07:21.482 "mask": "0x40000", 00:07:21.482 "tpoint_mask": "0x0" 00:07:21.482 } 00:07:21.482 }' 00:07:21.482 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:21.482 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:21.482 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:21.482 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:21.482 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:21.741 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:21.741 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:21.741 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:21.741 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:21.741 17:08:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:21.741 00:07:21.741 real 0m0.303s 00:07:21.741 user 0m0.262s 00:07:21.741 sys 0m0.030s 00:07:21.741 17:08:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.741 ************************************ 00:07:21.741 17:08:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.741 END TEST rpc_trace_cmd_test 00:07:21.741 ************************************ 00:07:21.741 17:08:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:21.741 17:08:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:21.741 17:08:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:21.741 17:08:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.741 17:08:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.741 17:08:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.741 ************************************ 00:07:21.741 START TEST rpc_daemon_integrity 00:07:21.741 ************************************ 00:07:21.741 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:21.741 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:21.741 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.741 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:21.741 
17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.741 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:21.741 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.000 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:22.000 { 00:07:22.000 "name": "Malloc2", 00:07:22.000 "aliases": [ 00:07:22.000 "2d425e4d-8470-4beb-98a8-8f8b399e5392" 00:07:22.000 ], 00:07:22.000 "product_name": "Malloc disk", 00:07:22.000 "block_size": 512, 00:07:22.000 "num_blocks": 16384, 00:07:22.000 "uuid": "2d425e4d-8470-4beb-98a8-8f8b399e5392", 00:07:22.000 "assigned_rate_limits": { 00:07:22.000 "rw_ios_per_sec": 0, 00:07:22.000 "rw_mbytes_per_sec": 0, 00:07:22.000 "r_mbytes_per_sec": 0, 00:07:22.000 "w_mbytes_per_sec": 0 00:07:22.000 }, 00:07:22.000 "claimed": false, 00:07:22.000 "zoned": false, 00:07:22.000 "supported_io_types": { 00:07:22.000 "read": true, 00:07:22.000 "write": true, 00:07:22.000 "unmap": true, 00:07:22.001 "flush": true, 00:07:22.001 "reset": true, 00:07:22.001 "nvme_admin": false, 00:07:22.001 "nvme_io": false, 00:07:22.001 "nvme_io_md": false, 00:07:22.001 "write_zeroes": true, 00:07:22.001 "zcopy": true, 00:07:22.001 "get_zone_info": false, 00:07:22.001 "zone_management": false, 00:07:22.001 "zone_append": false, 00:07:22.001 "compare": false, 00:07:22.001 "compare_and_write": false, 00:07:22.001 "abort": true, 00:07:22.001 "seek_hole": false, 00:07:22.001 "seek_data": false, 00:07:22.001 "copy": true, 00:07:22.001 "nvme_iov_md": false 00:07:22.001 }, 00:07:22.001 "memory_domains": [ 00:07:22.001 { 00:07:22.001 "dma_device_id": "system", 00:07:22.001 "dma_device_type": 1 00:07:22.001 }, 00:07:22.001 { 00:07:22.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.001 "dma_device_type": 2 00:07:22.001 } 00:07:22.001 ], 00:07:22.001 "driver_specific": {} 00:07:22.001 } 00:07:22.001 ]' 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.001 [2024-11-04 17:08:22.657464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:22.001 [2024-11-04 17:08:22.657518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:07:22.001 [2024-11-04 17:08:22.657537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7ed980 00:07:22.001 [2024-11-04 17:08:22.657547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.001 [2024-11-04 17:08:22.659446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.001 [2024-11-04 17:08:22.659504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:22.001 Passthru0 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:22.001 { 00:07:22.001 "name": "Malloc2", 00:07:22.001 "aliases": [ 00:07:22.001 "2d425e4d-8470-4beb-98a8-8f8b399e5392" 00:07:22.001 ], 00:07:22.001 "product_name": "Malloc disk", 00:07:22.001 "block_size": 512, 00:07:22.001 "num_blocks": 16384, 00:07:22.001 "uuid": "2d425e4d-8470-4beb-98a8-8f8b399e5392", 00:07:22.001 "assigned_rate_limits": { 00:07:22.001 "rw_ios_per_sec": 0, 00:07:22.001 "rw_mbytes_per_sec": 0, 00:07:22.001 "r_mbytes_per_sec": 0, 00:07:22.001 "w_mbytes_per_sec": 0 00:07:22.001 }, 00:07:22.001 "claimed": true, 00:07:22.001 "claim_type": "exclusive_write", 00:07:22.001 "zoned": false, 00:07:22.001 "supported_io_types": { 00:07:22.001 "read": true, 00:07:22.001 "write": true, 00:07:22.001 "unmap": true, 00:07:22.001 "flush": true, 00:07:22.001 "reset": true, 00:07:22.001 "nvme_admin": false, 00:07:22.001 "nvme_io": false, 00:07:22.001 "nvme_io_md": false, 00:07:22.001 "write_zeroes": true, 00:07:22.001 "zcopy": true, 00:07:22.001 "get_zone_info": false, 00:07:22.001 "zone_management": false, 00:07:22.001 "zone_append": false, 00:07:22.001 "compare": false, 00:07:22.001 "compare_and_write": false, 00:07:22.001 "abort": true, 00:07:22.001 "seek_hole": false, 00:07:22.001 "seek_data": false, 00:07:22.001 "copy": true, 00:07:22.001 "nvme_iov_md": false 00:07:22.001 }, 00:07:22.001 "memory_domains": [ 00:07:22.001 { 00:07:22.001 "dma_device_id": "system", 00:07:22.001 "dma_device_type": 1 00:07:22.001 }, 00:07:22.001 { 00:07:22.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.001 "dma_device_type": 2 00:07:22.001 } 00:07:22.001 ], 00:07:22.001 "driver_specific": {} 00:07:22.001 }, 00:07:22.001 { 00:07:22.001 "name": "Passthru0", 00:07:22.001 "aliases": [ 00:07:22.001 "b28d8c62-1de3-565f-abb2-b7a78a6492f1" 00:07:22.001 ], 00:07:22.001 "product_name": "passthru", 00:07:22.001 "block_size": 512, 00:07:22.001 "num_blocks": 16384, 00:07:22.001 "uuid": "b28d8c62-1de3-565f-abb2-b7a78a6492f1", 00:07:22.001 "assigned_rate_limits": { 00:07:22.001 "rw_ios_per_sec": 0, 00:07:22.001 "rw_mbytes_per_sec": 0, 00:07:22.001 "r_mbytes_per_sec": 0, 00:07:22.001 "w_mbytes_per_sec": 0 00:07:22.001 }, 00:07:22.001 "claimed": false, 00:07:22.001 "zoned": false, 00:07:22.001 "supported_io_types": { 00:07:22.001 "read": true, 00:07:22.001 "write": true, 00:07:22.001 "unmap": true, 00:07:22.001 "flush": true, 00:07:22.001 "reset": true, 00:07:22.001 "nvme_admin": false, 00:07:22.001 "nvme_io": false, 00:07:22.001 "nvme_io_md": 
false, 00:07:22.001 "write_zeroes": true, 00:07:22.001 "zcopy": true, 00:07:22.001 "get_zone_info": false, 00:07:22.001 "zone_management": false, 00:07:22.001 "zone_append": false, 00:07:22.001 "compare": false, 00:07:22.001 "compare_and_write": false, 00:07:22.001 "abort": true, 00:07:22.001 "seek_hole": false, 00:07:22.001 "seek_data": false, 00:07:22.001 "copy": true, 00:07:22.001 "nvme_iov_md": false 00:07:22.001 }, 00:07:22.001 "memory_domains": [ 00:07:22.001 { 00:07:22.001 "dma_device_id": "system", 00:07:22.001 "dma_device_type": 1 00:07:22.001 }, 00:07:22.001 { 00:07:22.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.001 "dma_device_type": 2 00:07:22.001 } 00:07:22.001 ], 00:07:22.001 "driver_specific": { 00:07:22.001 "passthru": { 00:07:22.001 "name": "Passthru0", 00:07:22.001 "base_bdev_name": "Malloc2" 00:07:22.001 } 00:07:22.001 } 00:07:22.001 } 00:07:22.001 ]' 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:22.001 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:22.260 17:08:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:22.260 00:07:22.260 real 0m0.327s 00:07:22.260 user 0m0.224s 00:07:22.260 sys 0m0.035s 00:07:22.260 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.260 ************************************ 00:07:22.260 END TEST rpc_daemon_integrity 00:07:22.260 ************************************ 00:07:22.260 17:08:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:22.260 17:08:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:22.260 17:08:22 rpc -- rpc/rpc.sh@84 -- # killprocess 56731 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@952 -- # '[' -z 56731 ']' 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@956 -- # kill -0 56731 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@957 -- # uname 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56731 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:22.260 
17:08:22 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:22.260 killing process with pid 56731 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56731' 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@971 -- # kill 56731 00:07:22.260 17:08:22 rpc -- common/autotest_common.sh@976 -- # wait 56731 00:07:22.518 00:07:22.518 real 0m3.079s 00:07:22.518 user 0m4.016s 00:07:22.518 sys 0m0.734s 00:07:22.518 17:08:23 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.518 17:08:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.518 ************************************ 00:07:22.518 END TEST rpc 00:07:22.518 ************************************ 00:07:22.518 17:08:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:22.518 17:08:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.518 17:08:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.518 17:08:23 -- common/autotest_common.sh@10 -- # set +x 00:07:22.518 ************************************ 00:07:22.518 START TEST skip_rpc 00:07:22.518 ************************************ 00:07:22.518 17:08:23 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:22.777 * Looking for test storage... 00:07:22.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:22.777 17:08:23 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.777 17:08:23 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.777 17:08:23 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.777 17:08:23 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.777 17:08:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.777 17:08:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.777 17:08:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.777 17:08:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.777 17:08:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.777 17:08:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.778 17:08:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:22.778 17:08:23 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.778 17:08:23 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.778 --rc genhtml_branch_coverage=1 00:07:22.778 --rc genhtml_function_coverage=1 00:07:22.778 --rc genhtml_legend=1 00:07:22.778 --rc geninfo_all_blocks=1 00:07:22.778 --rc geninfo_unexecuted_blocks=1 00:07:22.778 00:07:22.778 ' 00:07:22.778 17:08:23 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.778 --rc genhtml_branch_coverage=1 00:07:22.778 --rc genhtml_function_coverage=1 00:07:22.778 --rc genhtml_legend=1 00:07:22.778 --rc geninfo_all_blocks=1 00:07:22.778 --rc geninfo_unexecuted_blocks=1 00:07:22.778 00:07:22.778 ' 00:07:22.778 17:08:23 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:22.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.778 --rc genhtml_branch_coverage=1 00:07:22.778 --rc genhtml_function_coverage=1 00:07:22.778 --rc genhtml_legend=1 00:07:22.778 --rc geninfo_all_blocks=1 00:07:22.778 --rc geninfo_unexecuted_blocks=1 00:07:22.778 00:07:22.778 ' 00:07:22.778 17:08:23 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.778 --rc genhtml_branch_coverage=1 00:07:22.778 --rc genhtml_function_coverage=1 00:07:22.778 --rc genhtml_legend=1 00:07:22.778 --rc geninfo_all_blocks=1 00:07:22.778 --rc geninfo_unexecuted_blocks=1 00:07:22.778 00:07:22.778 ' 00:07:22.778 17:08:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:22.778 17:08:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:22.778 17:08:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:22.778 17:08:23 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.778 17:08:23 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.778 17:08:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.778 ************************************ 00:07:22.778 START TEST skip_rpc 00:07:22.778 ************************************ 00:07:22.778 17:08:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:07:22.778 17:08:23 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56937 00:07:22.778 17:08:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.778 17:08:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:22.778 17:08:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:22.778 [2024-11-04 17:08:23.564291] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:07:22.778 [2024-11-04 17:08:23.564951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56937 ] 00:07:23.041 [2024-11-04 17:08:23.711908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.041 [2024-11-04 17:08:23.768256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.041 [2024-11-04 17:08:23.840752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56937 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56937 ']' 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56937 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56937 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 56937' 00:07:28.327 killing process with pid 56937 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56937 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56937 00:07:28.327 00:07:28.327 real 0m5.424s 00:07:28.327 user 0m5.034s 00:07:28.327 sys 0m0.307s 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.327 17:08:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.327 ************************************ 00:07:28.327 END TEST skip_rpc 00:07:28.327 ************************************ 00:07:28.327 17:08:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:28.327 17:08:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:28.327 17:08:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.327 17:08:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.327 ************************************ 00:07:28.327 START TEST skip_rpc_with_json 00:07:28.327 ************************************ 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57018 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57018 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57018 ']' 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.327 17:08:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:28.327 [2024-11-04 17:08:29.043996] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:07:28.327 [2024-11-04 17:08:29.044109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57018 ] 00:07:28.586 [2024-11-04 17:08:29.187398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.586 [2024-11-04 17:08:29.242357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.586 [2024-11-04 17:08:29.315554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.522 [2024-11-04 17:08:30.033511] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:29.522 request: 00:07:29.522 { 00:07:29.522 "trtype": "tcp", 00:07:29.522 "method": "nvmf_get_transports", 00:07:29.522 "req_id": 1 00:07:29.522 } 00:07:29.522 Got JSON-RPC error response 00:07:29.522 response: 00:07:29.522 { 00:07:29.522 "code": -19, 00:07:29.522 "message": "No such device" 00:07:29.522 } 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.522 [2024-11-04 17:08:30.045650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.522 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.523 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.523 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:29.523 { 00:07:29.523 "subsystems": [ 00:07:29.523 { 00:07:29.523 "subsystem": "fsdev", 00:07:29.523 "config": [ 00:07:29.523 { 00:07:29.523 "method": "fsdev_set_opts", 00:07:29.523 "params": { 00:07:29.523 "fsdev_io_pool_size": 65535, 00:07:29.523 "fsdev_io_cache_size": 256 00:07:29.523 } 00:07:29.523 } 00:07:29.523 ] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "keyring", 00:07:29.523 "config": [] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "iobuf", 00:07:29.523 "config": [ 00:07:29.523 { 00:07:29.523 "method": "iobuf_set_options", 00:07:29.523 "params": { 00:07:29.523 "small_pool_count": 8192, 00:07:29.523 "large_pool_count": 1024, 00:07:29.523 "small_bufsize": 8192, 00:07:29.523 "large_bufsize": 135168, 00:07:29.523 "enable_numa": false 00:07:29.523 } 
00:07:29.523 } 00:07:29.523 ] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "sock", 00:07:29.523 "config": [ 00:07:29.523 { 00:07:29.523 "method": "sock_set_default_impl", 00:07:29.523 "params": { 00:07:29.523 "impl_name": "uring" 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "sock_impl_set_options", 00:07:29.523 "params": { 00:07:29.523 "impl_name": "ssl", 00:07:29.523 "recv_buf_size": 4096, 00:07:29.523 "send_buf_size": 4096, 00:07:29.523 "enable_recv_pipe": true, 00:07:29.523 "enable_quickack": false, 00:07:29.523 "enable_placement_id": 0, 00:07:29.523 "enable_zerocopy_send_server": true, 00:07:29.523 "enable_zerocopy_send_client": false, 00:07:29.523 "zerocopy_threshold": 0, 00:07:29.523 "tls_version": 0, 00:07:29.523 "enable_ktls": false 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "sock_impl_set_options", 00:07:29.523 "params": { 00:07:29.523 "impl_name": "posix", 00:07:29.523 "recv_buf_size": 2097152, 00:07:29.523 "send_buf_size": 2097152, 00:07:29.523 "enable_recv_pipe": true, 00:07:29.523 "enable_quickack": false, 00:07:29.523 "enable_placement_id": 0, 00:07:29.523 "enable_zerocopy_send_server": true, 00:07:29.523 "enable_zerocopy_send_client": false, 00:07:29.523 "zerocopy_threshold": 0, 00:07:29.523 "tls_version": 0, 00:07:29.523 "enable_ktls": false 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "sock_impl_set_options", 00:07:29.523 "params": { 00:07:29.523 "impl_name": "uring", 00:07:29.523 "recv_buf_size": 2097152, 00:07:29.523 "send_buf_size": 2097152, 00:07:29.523 "enable_recv_pipe": true, 00:07:29.523 "enable_quickack": false, 00:07:29.523 "enable_placement_id": 0, 00:07:29.523 "enable_zerocopy_send_server": false, 00:07:29.523 "enable_zerocopy_send_client": false, 00:07:29.523 "zerocopy_threshold": 0, 00:07:29.523 "tls_version": 0, 00:07:29.523 "enable_ktls": false 00:07:29.523 } 00:07:29.523 } 00:07:29.523 ] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "vmd", 00:07:29.523 "config": [] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "accel", 00:07:29.523 "config": [ 00:07:29.523 { 00:07:29.523 "method": "accel_set_options", 00:07:29.523 "params": { 00:07:29.523 "small_cache_size": 128, 00:07:29.523 "large_cache_size": 16, 00:07:29.523 "task_count": 2048, 00:07:29.523 "sequence_count": 2048, 00:07:29.523 "buf_count": 2048 00:07:29.523 } 00:07:29.523 } 00:07:29.523 ] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "bdev", 00:07:29.523 "config": [ 00:07:29.523 { 00:07:29.523 "method": "bdev_set_options", 00:07:29.523 "params": { 00:07:29.523 "bdev_io_pool_size": 65535, 00:07:29.523 "bdev_io_cache_size": 256, 00:07:29.523 "bdev_auto_examine": true, 00:07:29.523 "iobuf_small_cache_size": 128, 00:07:29.523 "iobuf_large_cache_size": 16 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "bdev_raid_set_options", 00:07:29.523 "params": { 00:07:29.523 "process_window_size_kb": 1024, 00:07:29.523 "process_max_bandwidth_mb_sec": 0 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "bdev_iscsi_set_options", 00:07:29.523 "params": { 00:07:29.523 "timeout_sec": 30 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "bdev_nvme_set_options", 00:07:29.523 "params": { 00:07:29.523 "action_on_timeout": "none", 00:07:29.523 "timeout_us": 0, 00:07:29.523 "timeout_admin_us": 0, 00:07:29.523 "keep_alive_timeout_ms": 10000, 00:07:29.523 "arbitration_burst": 0, 00:07:29.523 "low_priority_weight": 0, 00:07:29.523 "medium_priority_weight": 
0, 00:07:29.523 "high_priority_weight": 0, 00:07:29.523 "nvme_adminq_poll_period_us": 10000, 00:07:29.523 "nvme_ioq_poll_period_us": 0, 00:07:29.523 "io_queue_requests": 0, 00:07:29.523 "delay_cmd_submit": true, 00:07:29.523 "transport_retry_count": 4, 00:07:29.523 "bdev_retry_count": 3, 00:07:29.523 "transport_ack_timeout": 0, 00:07:29.523 "ctrlr_loss_timeout_sec": 0, 00:07:29.523 "reconnect_delay_sec": 0, 00:07:29.523 "fast_io_fail_timeout_sec": 0, 00:07:29.523 "disable_auto_failback": false, 00:07:29.523 "generate_uuids": false, 00:07:29.523 "transport_tos": 0, 00:07:29.523 "nvme_error_stat": false, 00:07:29.523 "rdma_srq_size": 0, 00:07:29.523 "io_path_stat": false, 00:07:29.523 "allow_accel_sequence": false, 00:07:29.523 "rdma_max_cq_size": 0, 00:07:29.523 "rdma_cm_event_timeout_ms": 0, 00:07:29.523 "dhchap_digests": [ 00:07:29.523 "sha256", 00:07:29.523 "sha384", 00:07:29.523 "sha512" 00:07:29.523 ], 00:07:29.523 "dhchap_dhgroups": [ 00:07:29.523 "null", 00:07:29.523 "ffdhe2048", 00:07:29.523 "ffdhe3072", 00:07:29.523 "ffdhe4096", 00:07:29.523 "ffdhe6144", 00:07:29.523 "ffdhe8192" 00:07:29.523 ] 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "bdev_nvme_set_hotplug", 00:07:29.523 "params": { 00:07:29.523 "period_us": 100000, 00:07:29.523 "enable": false 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "bdev_wait_for_examine" 00:07:29.523 } 00:07:29.523 ] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "scsi", 00:07:29.523 "config": null 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "scheduler", 00:07:29.523 "config": [ 00:07:29.523 { 00:07:29.523 "method": "framework_set_scheduler", 00:07:29.523 "params": { 00:07:29.523 "name": "static" 00:07:29.523 } 00:07:29.523 } 00:07:29.523 ] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "vhost_scsi", 00:07:29.523 "config": [] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "vhost_blk", 00:07:29.523 "config": [] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "ublk", 00:07:29.523 "config": [] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "nbd", 00:07:29.523 "config": [] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "nvmf", 00:07:29.523 "config": [ 00:07:29.523 { 00:07:29.523 "method": "nvmf_set_config", 00:07:29.523 "params": { 00:07:29.523 "discovery_filter": "match_any", 00:07:29.523 "admin_cmd_passthru": { 00:07:29.523 "identify_ctrlr": false 00:07:29.523 }, 00:07:29.523 "dhchap_digests": [ 00:07:29.523 "sha256", 00:07:29.523 "sha384", 00:07:29.523 "sha512" 00:07:29.523 ], 00:07:29.523 "dhchap_dhgroups": [ 00:07:29.523 "null", 00:07:29.523 "ffdhe2048", 00:07:29.523 "ffdhe3072", 00:07:29.523 "ffdhe4096", 00:07:29.523 "ffdhe6144", 00:07:29.523 "ffdhe8192" 00:07:29.523 ] 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "nvmf_set_max_subsystems", 00:07:29.523 "params": { 00:07:29.523 "max_subsystems": 1024 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "nvmf_set_crdt", 00:07:29.523 "params": { 00:07:29.523 "crdt1": 0, 00:07:29.523 "crdt2": 0, 00:07:29.523 "crdt3": 0 00:07:29.523 } 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "method": "nvmf_create_transport", 00:07:29.523 "params": { 00:07:29.523 "trtype": "TCP", 00:07:29.523 "max_queue_depth": 128, 00:07:29.523 "max_io_qpairs_per_ctrlr": 127, 00:07:29.523 "in_capsule_data_size": 4096, 00:07:29.523 "max_io_size": 131072, 00:07:29.523 "io_unit_size": 131072, 00:07:29.523 "max_aq_depth": 128, 00:07:29.523 "num_shared_buffers": 511, 00:07:29.523 
"buf_cache_size": 4294967295, 00:07:29.523 "dif_insert_or_strip": false, 00:07:29.523 "zcopy": false, 00:07:29.523 "c2h_success": true, 00:07:29.523 "sock_priority": 0, 00:07:29.523 "abort_timeout_sec": 1, 00:07:29.523 "ack_timeout": 0, 00:07:29.523 "data_wr_pool_size": 0 00:07:29.523 } 00:07:29.523 } 00:07:29.523 ] 00:07:29.523 }, 00:07:29.523 { 00:07:29.523 "subsystem": "iscsi", 00:07:29.523 "config": [ 00:07:29.523 { 00:07:29.523 "method": "iscsi_set_options", 00:07:29.523 "params": { 00:07:29.523 "node_base": "iqn.2016-06.io.spdk", 00:07:29.523 "max_sessions": 128, 00:07:29.523 "max_connections_per_session": 2, 00:07:29.524 "max_queue_depth": 64, 00:07:29.524 "default_time2wait": 2, 00:07:29.524 "default_time2retain": 20, 00:07:29.524 "first_burst_length": 8192, 00:07:29.524 "immediate_data": true, 00:07:29.524 "allow_duplicated_isid": false, 00:07:29.524 "error_recovery_level": 0, 00:07:29.524 "nop_timeout": 60, 00:07:29.524 "nop_in_interval": 30, 00:07:29.524 "disable_chap": false, 00:07:29.524 "require_chap": false, 00:07:29.524 "mutual_chap": false, 00:07:29.524 "chap_group": 0, 00:07:29.524 "max_large_datain_per_connection": 64, 00:07:29.524 "max_r2t_per_connection": 4, 00:07:29.524 "pdu_pool_size": 36864, 00:07:29.524 "immediate_data_pool_size": 16384, 00:07:29.524 "data_out_pool_size": 2048 00:07:29.524 } 00:07:29.524 } 00:07:29.524 ] 00:07:29.524 } 00:07:29.524 ] 00:07:29.524 } 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57018 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57018 ']' 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57018 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57018 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:29.524 killing process with pid 57018 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57018' 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57018 00:07:29.524 17:08:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57018 00:07:30.091 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57051 00:07:30.092 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:30.092 17:08:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57051 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57051 ']' 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57051 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:35.391 17:08:35 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57051 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:35.391 killing process with pid 57051 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57051' 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57051 00:07:35.391 17:08:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57051 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:35.391 00:07:35.391 real 0m7.091s 00:07:35.391 user 0m6.867s 00:07:35.391 sys 0m0.644s 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:35.391 ************************************ 00:07:35.391 END TEST skip_rpc_with_json 00:07:35.391 ************************************ 00:07:35.391 17:08:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:35.391 17:08:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.391 17:08:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.391 17:08:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.391 ************************************ 00:07:35.391 START TEST skip_rpc_with_delay 00:07:35.391 ************************************ 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:35.391 17:08:36 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:35.391 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:35.391 [2024-11-04 17:08:36.179277] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:35.650 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:35.650 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.650 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:35.650 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.650 00:07:35.650 real 0m0.074s 00:07:35.650 user 0m0.048s 00:07:35.650 sys 0m0.024s 00:07:35.650 ************************************ 00:07:35.650 END TEST skip_rpc_with_delay 00:07:35.650 ************************************ 00:07:35.650 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.650 17:08:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:35.650 17:08:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:35.650 17:08:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:35.650 17:08:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:35.650 17:08:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.650 17:08:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.650 17:08:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.650 ************************************ 00:07:35.650 START TEST exit_on_failed_rpc_init 00:07:35.650 ************************************ 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57161 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57161 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57161 ']' 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.650 17:08:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:35.650 [2024-11-04 17:08:36.324397] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:07:35.650 [2024-11-04 17:08:36.324518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57161 ] 00:07:35.910 [2024-11-04 17:08:36.472115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.910 [2024-11-04 17:08:36.531712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.910 [2024-11-04 17:08:36.604063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:36.478 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:36.738 [2024-11-04 17:08:37.351152] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:07:36.738 [2024-11-04 17:08:37.352069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57179 ] 00:07:36.738 [2024-11-04 17:08:37.503763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.997 [2024-11-04 17:08:37.567006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.997 [2024-11-04 17:08:37.567618] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:36.997 [2024-11-04 17:08:37.567758] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:36.997 [2024-11-04 17:08:37.567850] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57161 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57161 ']' 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57161 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57161 00:07:36.997 killing process with pid 57161 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57161' 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57161 00:07:36.997 17:08:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57161 00:07:37.565 00:07:37.565 real 0m1.820s 00:07:37.565 user 0m2.081s 00:07:37.565 sys 0m0.429s 00:07:37.565 ************************************ 00:07:37.565 END TEST exit_on_failed_rpc_init 00:07:37.565 ************************************ 00:07:37.565 17:08:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:37.565 17:08:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:37.565 17:08:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:37.565 00:07:37.565 real 0m14.802s 00:07:37.565 user 0m14.208s 00:07:37.565 sys 0m1.609s 00:07:37.565 ************************************ 00:07:37.565 END TEST skip_rpc 00:07:37.565 ************************************ 00:07:37.565 17:08:38 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:37.565 17:08:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.565 17:08:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:37.565 17:08:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:37.565 17:08:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:37.565 17:08:38 -- common/autotest_common.sh@10 -- # set +x 00:07:37.565 
************************************ 00:07:37.565 START TEST rpc_client 00:07:37.565 ************************************ 00:07:37.565 17:08:38 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:37.565 * Looking for test storage... 00:07:37.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:37.565 17:08:38 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:37.565 17:08:38 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:37.565 17:08:38 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:07:37.565 17:08:38 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:37.565 17:08:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.566 17:08:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:37.566 17:08:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:37.566 17:08:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.566 17:08:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:37.566 17:08:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.566 17:08:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.566 17:08:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.566 17:08:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:37.566 17:08:38 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.566 17:08:38 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:37.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.566 --rc genhtml_branch_coverage=1 00:07:37.566 --rc genhtml_function_coverage=1 00:07:37.566 --rc genhtml_legend=1 00:07:37.566 --rc geninfo_all_blocks=1 00:07:37.566 --rc geninfo_unexecuted_blocks=1 00:07:37.566 00:07:37.566 ' 00:07:37.566 17:08:38 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:37.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.566 --rc genhtml_branch_coverage=1 00:07:37.566 --rc genhtml_function_coverage=1 00:07:37.566 --rc genhtml_legend=1 00:07:37.566 --rc geninfo_all_blocks=1 00:07:37.566 --rc geninfo_unexecuted_blocks=1 00:07:37.566 00:07:37.566 ' 00:07:37.566 17:08:38 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:37.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.566 --rc genhtml_branch_coverage=1 00:07:37.566 --rc genhtml_function_coverage=1 00:07:37.566 --rc genhtml_legend=1 00:07:37.566 --rc geninfo_all_blocks=1 00:07:37.566 --rc geninfo_unexecuted_blocks=1 00:07:37.566 00:07:37.566 ' 00:07:37.566 17:08:38 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:37.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.566 --rc genhtml_branch_coverage=1 00:07:37.566 --rc genhtml_function_coverage=1 00:07:37.566 --rc genhtml_legend=1 00:07:37.566 --rc geninfo_all_blocks=1 00:07:37.566 --rc geninfo_unexecuted_blocks=1 00:07:37.566 00:07:37.566 ' 00:07:37.566 17:08:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:37.825 OK 00:07:37.825 17:08:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:37.825 00:07:37.825 real 0m0.210s 00:07:37.825 user 0m0.127s 00:07:37.825 sys 0m0.091s 00:07:37.825 ************************************ 00:07:37.825 END TEST rpc_client 00:07:37.825 ************************************ 00:07:37.825 17:08:38 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:37.825 17:08:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:37.825 17:08:38 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:37.825 17:08:38 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:37.825 17:08:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:37.825 17:08:38 -- common/autotest_common.sh@10 -- # set +x 00:07:37.825 ************************************ 00:07:37.825 START TEST json_config 00:07:37.825 ************************************ 00:07:37.825 17:08:38 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:37.825 17:08:38 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:37.825 17:08:38 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:07:37.825 17:08:38 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:37.825 17:08:38 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:37.825 17:08:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.825 17:08:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.825 17:08:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.825 17:08:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.825 17:08:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.825 17:08:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.825 17:08:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.825 17:08:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.825 17:08:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.825 17:08:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.825 17:08:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.825 17:08:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:37.825 17:08:38 json_config -- scripts/common.sh@345 -- # : 1 00:07:37.825 17:08:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.825 17:08:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.825 17:08:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:37.825 17:08:38 json_config -- scripts/common.sh@353 -- # local d=1 00:07:37.825 17:08:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.825 17:08:38 json_config -- scripts/common.sh@355 -- # echo 1 00:07:37.825 17:08:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.825 17:08:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:37.825 17:08:38 json_config -- scripts/common.sh@353 -- # local d=2 00:07:37.825 17:08:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.825 17:08:38 json_config -- scripts/common.sh@355 -- # echo 2 00:07:37.825 17:08:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.825 17:08:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.825 17:08:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.825 17:08:38 json_config -- scripts/common.sh@368 -- # return 0 00:07:37.825 17:08:38 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.825 17:08:38 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:37.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.825 --rc genhtml_branch_coverage=1 00:07:37.825 --rc genhtml_function_coverage=1 00:07:37.825 --rc genhtml_legend=1 00:07:37.825 --rc geninfo_all_blocks=1 00:07:37.825 --rc geninfo_unexecuted_blocks=1 00:07:37.825 00:07:37.825 ' 00:07:37.825 17:08:38 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:37.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.825 --rc genhtml_branch_coverage=1 00:07:37.825 --rc genhtml_function_coverage=1 00:07:37.825 --rc genhtml_legend=1 00:07:37.825 --rc geninfo_all_blocks=1 00:07:37.825 --rc geninfo_unexecuted_blocks=1 00:07:37.825 00:07:37.825 ' 00:07:37.826 17:08:38 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:37.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.826 --rc genhtml_branch_coverage=1 00:07:37.826 --rc genhtml_function_coverage=1 00:07:37.826 --rc genhtml_legend=1 00:07:37.826 --rc geninfo_all_blocks=1 00:07:37.826 --rc geninfo_unexecuted_blocks=1 00:07:37.826 00:07:37.826 ' 00:07:37.826 17:08:38 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:37.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.826 --rc genhtml_branch_coverage=1 00:07:37.826 --rc genhtml_function_coverage=1 00:07:37.826 --rc genhtml_legend=1 00:07:37.826 --rc geninfo_all_blocks=1 00:07:37.826 --rc geninfo_unexecuted_blocks=1 00:07:37.826 00:07:37.826 ' 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.826 17:08:38 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.826 17:08:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.826 17:08:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.826 17:08:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.826 17:08:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.826 17:08:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.826 17:08:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.826 17:08:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.826 17:08:38 json_config -- paths/export.sh@5 -- # export PATH 00:07:37.826 17:08:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@51 -- # : 0 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.826 17:08:38 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.826 17:08:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:37.826 INFO: JSON configuration test init 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:37.826 17:08:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.826 17:08:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.826 17:08:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:37.826 17:08:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:37.826 17:08:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.085 17:08:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:38.085 17:08:38 json_config -- json_config/common.sh@9 -- # local app=target 00:07:38.085 17:08:38 json_config -- json_config/common.sh@10 -- # shift 
00:07:38.085 17:08:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:38.085 17:08:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:38.085 17:08:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:38.085 17:08:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.085 17:08:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.085 17:08:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57313 00:07:38.085 17:08:38 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:38.085 17:08:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:38.085 Waiting for target to run... 00:07:38.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:38.085 17:08:38 json_config -- json_config/common.sh@25 -- # waitforlisten 57313 /var/tmp/spdk_tgt.sock 00:07:38.085 17:08:38 json_config -- common/autotest_common.sh@833 -- # '[' -z 57313 ']' 00:07:38.085 17:08:38 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:38.085 17:08:38 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:38.085 17:08:38 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:38.085 17:08:38 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.085 17:08:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.085 [2024-11-04 17:08:38.705025] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:07:38.085 [2024-11-04 17:08:38.705316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57313 ] 00:07:38.653 [2024-11-04 17:08:39.153112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.653 [2024-11-04 17:08:39.199975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.912 17:08:39 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:38.912 17:08:39 json_config -- common/autotest_common.sh@866 -- # return 0 00:07:38.912 00:07:38.912 17:08:39 json_config -- json_config/common.sh@26 -- # echo '' 00:07:38.912 17:08:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:38.912 17:08:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:38.912 17:08:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.912 17:08:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.912 17:08:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:38.912 17:08:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:38.912 17:08:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.912 17:08:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:39.179 17:08:39 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:39.179 17:08:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:39.179 17:08:39 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:39.437 [2024-11-04 17:08:40.042847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.726 17:08:40 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:39.726 17:08:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:39.726 17:08:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.726 17:08:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:39.726 17:08:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:39.726 17:08:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:39.726 17:08:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:39.726 17:08:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:39.726 17:08:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:39.727 17:08:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:39.727 17:08:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:39.727 17:08:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@54 -- # sort 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:40.027 17:08:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:40.027 17:08:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:40.027 17:08:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:40.028 17:08:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:40.028 17:08:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:40.028 17:08:40 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:40.028 17:08:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:40.028 17:08:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:40.287 MallocForNvmf0 00:07:40.287 17:08:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:40.287 17:08:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:40.547 MallocForNvmf1 00:07:40.547 17:08:41 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:40.547 17:08:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:40.806 [2024-11-04 17:08:41.459289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.806 17:08:41 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:40.806 17:08:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.065 17:08:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:41.065 17:08:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:41.325 17:08:41 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:41.325 17:08:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:41.584 17:08:42 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:41.584 17:08:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:41.584 [2024-11-04 17:08:42.371835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:41.842 17:08:42 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:41.842 17:08:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.842 17:08:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:41.842 17:08:42 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:41.842 17:08:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.842 17:08:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:41.842 17:08:42 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:07:41.842 17:08:42 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:41.842 17:08:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:42.101 MallocBdevForConfigChangeCheck 00:07:42.101 17:08:42 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:42.101 17:08:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.101 17:08:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:42.101 17:08:42 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:42.101 17:08:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:42.670 INFO: shutting down applications... 00:07:42.670 17:08:43 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:42.670 17:08:43 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:42.670 17:08:43 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:42.670 17:08:43 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:42.670 17:08:43 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:42.929 Calling clear_iscsi_subsystem 00:07:42.929 Calling clear_nvmf_subsystem 00:07:42.929 Calling clear_nbd_subsystem 00:07:42.929 Calling clear_ublk_subsystem 00:07:42.929 Calling clear_vhost_blk_subsystem 00:07:42.929 Calling clear_vhost_scsi_subsystem 00:07:42.929 Calling clear_bdev_subsystem 00:07:42.929 17:08:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:42.929 17:08:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:42.929 17:08:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:42.929 17:08:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:42.929 17:08:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:42.929 17:08:43 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:43.497 17:08:44 json_config -- json_config/json_config.sh@352 -- # break 00:07:43.497 17:08:44 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:43.497 17:08:44 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:43.497 17:08:44 json_config -- json_config/common.sh@31 -- # local app=target 00:07:43.497 17:08:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:43.497 17:08:44 json_config -- json_config/common.sh@35 -- # [[ -n 57313 ]] 00:07:43.497 17:08:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57313 00:07:43.497 17:08:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:43.497 17:08:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.497 17:08:44 json_config -- json_config/common.sh@41 -- # kill -0 57313 00:07:43.497 17:08:44 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:07:43.756 17:08:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:43.756 17:08:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.756 17:08:44 json_config -- json_config/common.sh@41 -- # kill -0 57313 00:07:43.756 SPDK target shutdown done 00:07:43.756 INFO: relaunching applications... 00:07:43.756 17:08:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:43.756 17:08:44 json_config -- json_config/common.sh@43 -- # break 00:07:43.756 17:08:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:43.756 17:08:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:43.756 17:08:44 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:43.756 17:08:44 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:43.756 17:08:44 json_config -- json_config/common.sh@9 -- # local app=target 00:07:43.756 17:08:44 json_config -- json_config/common.sh@10 -- # shift 00:07:43.756 17:08:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:43.756 17:08:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:43.756 17:08:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:43.756 17:08:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:43.756 17:08:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:43.756 17:08:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57514 00:07:43.756 17:08:44 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:43.756 17:08:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:43.756 Waiting for target to run... 00:07:43.756 17:08:44 json_config -- json_config/common.sh@25 -- # waitforlisten 57514 /var/tmp/spdk_tgt.sock 00:07:43.756 17:08:44 json_config -- common/autotest_common.sh@833 -- # '[' -z 57514 ']' 00:07:43.756 17:08:44 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:43.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:43.756 17:08:44 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:43.756 17:08:44 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:43.757 17:08:44 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:43.757 17:08:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:44.015 [2024-11-04 17:08:44.574257] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:07:44.015 [2024-11-04 17:08:44.574358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57514 ] 00:07:44.274 [2024-11-04 17:08:44.987562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.274 [2024-11-04 17:08:45.036000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.533 [2024-11-04 17:08:45.171885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.792 [2024-11-04 17:08:45.388080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.792 [2024-11-04 17:08:45.420145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:44.792 17:08:45 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:44.792 17:08:45 json_config -- common/autotest_common.sh@866 -- # return 0 00:07:44.792 17:08:45 json_config -- json_config/common.sh@26 -- # echo '' 00:07:44.792 00:07:44.792 INFO: Checking if target configuration is the same... 00:07:44.792 17:08:45 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:44.792 17:08:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:44.792 17:08:45 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:44.792 17:08:45 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:44.792 17:08:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:44.792 + '[' 2 -ne 2 ']' 00:07:44.792 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:45.050 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:45.051 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:45.051 +++ basename /dev/fd/62 00:07:45.051 ++ mktemp /tmp/62.XXX 00:07:45.051 + tmp_file_1=/tmp/62.5D6 00:07:45.051 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:45.051 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:45.051 + tmp_file_2=/tmp/spdk_tgt_config.json.Rgo 00:07:45.051 + ret=0 00:07:45.051 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:45.309 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:45.309 + diff -u /tmp/62.5D6 /tmp/spdk_tgt_config.json.Rgo 00:07:45.309 INFO: JSON config files are the same 00:07:45.309 + echo 'INFO: JSON config files are the same' 00:07:45.309 + rm /tmp/62.5D6 /tmp/spdk_tgt_config.json.Rgo 00:07:45.309 + exit 0 00:07:45.309 INFO: changing configuration and checking if this can be detected... 00:07:45.309 17:08:46 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:45.309 17:08:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:07:45.309 17:08:46 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:45.309 17:08:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:45.568 17:08:46 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:45.568 17:08:46 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:45.568 17:08:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:45.568 + '[' 2 -ne 2 ']' 00:07:45.568 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:45.568 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:45.568 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:45.568 +++ basename /dev/fd/62 00:07:45.568 ++ mktemp /tmp/62.XXX 00:07:45.568 + tmp_file_1=/tmp/62.iwE 00:07:45.568 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:45.568 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:45.568 + tmp_file_2=/tmp/spdk_tgt_config.json.kEh 00:07:45.568 + ret=0 00:07:45.568 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:46.136 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:46.136 + diff -u /tmp/62.iwE /tmp/spdk_tgt_config.json.kEh 00:07:46.136 + ret=1 00:07:46.136 + echo '=== Start of file: /tmp/62.iwE ===' 00:07:46.136 + cat /tmp/62.iwE 00:07:46.136 + echo '=== End of file: /tmp/62.iwE ===' 00:07:46.136 + echo '' 00:07:46.136 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kEh ===' 00:07:46.136 + cat /tmp/spdk_tgt_config.json.kEh 00:07:46.136 + echo '=== End of file: /tmp/spdk_tgt_config.json.kEh ===' 00:07:46.136 + echo '' 00:07:46.136 + rm /tmp/62.iwE /tmp/spdk_tgt_config.json.kEh 00:07:46.136 + exit 1 00:07:46.136 INFO: configuration change detected. 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
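Both checks above follow the same recipe that json_diff.sh traces out: dump the running target's configuration with save_config, normalize it and the on-disk file with config_filter.py -method sort, and compare the results with diff -u. A condensed sketch of that comparison, assuming the helper scripts and socket path shown in the trace and assuming config_filter.py reads stdin and writes stdout, as the redirections in json_diff.sh suggest:

#!/usr/bin/env bash
# Sketch: compare the live target config against an on-disk JSON config.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock
ON_DISK=$SPDK/spdk_tgt_config.json

live_sorted=$(mktemp /tmp/62.XXX)
disk_sorted=$(mktemp /tmp/spdk_tgt_config.json.XXX)

# Dump the running configuration and sort both documents so that ordering
# differences do not show up as spurious diffs.
"$SPDK/scripts/rpc.py" -s "$SOCK" save_config \
    | "$SPDK/test/json_config/config_filter.py" -method sort > "$live_sorted"
"$SPDK/test/json_config/config_filter.py" -method sort < "$ON_DISK" > "$disk_sorted"

if diff -u "$live_sorted" "$disk_sorted"; then
    echo 'INFO: JSON config files are the same'
    ret=0
else
    echo 'INFO: configuration change detected.'
    ret=1
fi

rm -f "$live_sorted" "$disk_sorted"
exit "$ret"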
00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@324 -- # [[ -n 57514 ]] 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.136 17:08:46 json_config -- json_config/json_config.sh@330 -- # killprocess 57514 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@952 -- # '[' -z 57514 ']' 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@956 -- # kill -0 57514 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@957 -- # uname 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57514 00:07:46.136 killing process with pid 57514 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57514' 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@971 -- # kill 57514 00:07:46.136 17:08:46 json_config -- common/autotest_common.sh@976 -- # wait 57514 00:07:46.394 17:08:47 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:46.394 17:08:47 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:46.394 17:08:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:46.394 17:08:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.394 INFO: Success 00:07:46.394 17:08:47 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:46.394 17:08:47 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:46.394 ************************************ 00:07:46.394 END TEST json_config 00:07:46.394 
************************************ 00:07:46.394 00:07:46.394 real 0m8.727s 00:07:46.394 user 0m12.524s 00:07:46.394 sys 0m1.739s 00:07:46.394 17:08:47 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.394 17:08:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.394 17:08:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:46.394 17:08:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:46.394 17:08:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.394 17:08:47 -- common/autotest_common.sh@10 -- # set +x 00:07:46.654 ************************************ 00:07:46.654 START TEST json_config_extra_key 00:07:46.654 ************************************ 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.654 17:08:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:46.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.654 --rc genhtml_branch_coverage=1 00:07:46.654 --rc genhtml_function_coverage=1 00:07:46.654 --rc genhtml_legend=1 00:07:46.654 --rc geninfo_all_blocks=1 00:07:46.654 --rc geninfo_unexecuted_blocks=1 00:07:46.654 00:07:46.654 ' 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:46.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.654 --rc genhtml_branch_coverage=1 00:07:46.654 --rc genhtml_function_coverage=1 00:07:46.654 --rc genhtml_legend=1 00:07:46.654 --rc geninfo_all_blocks=1 00:07:46.654 --rc geninfo_unexecuted_blocks=1 00:07:46.654 00:07:46.654 ' 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:46.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.654 --rc genhtml_branch_coverage=1 00:07:46.654 --rc genhtml_function_coverage=1 00:07:46.654 --rc genhtml_legend=1 00:07:46.654 --rc geninfo_all_blocks=1 00:07:46.654 --rc geninfo_unexecuted_blocks=1 00:07:46.654 00:07:46.654 ' 00:07:46.654 17:08:47 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:46.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.654 --rc genhtml_branch_coverage=1 00:07:46.654 --rc genhtml_function_coverage=1 00:07:46.654 --rc genhtml_legend=1 00:07:46.654 --rc geninfo_all_blocks=1 00:07:46.654 --rc geninfo_unexecuted_blocks=1 00:07:46.654 00:07:46.654 ' 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.655 17:08:47 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.655 17:08:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.655 17:08:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.655 17:08:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.655 17:08:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.655 17:08:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.655 17:08:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.655 17:08:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.655 17:08:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:46.655 17:08:47 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.655 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.655 17:08:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.655 INFO: launching applications... 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
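The declare -A lines above are how common.sh keeps per-app bookkeeping: one associative array each for the pid, the RPC socket, the extra spdk_tgt parameters, and the JSON config to load, all keyed by the app name ('target' here). A stripped-down sketch of how json_config_test_start_app puts those pieces together for the launch that follows; this is simplified to the target flavour only, with none of the original error handling:

#!/usr/bin/env bash
# Sketch: per-app bookkeeping arrays and the spdk_tgt launch they drive.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk

declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']="$SPDK/test/json_config/extra_key.json")

start_app() {
    local app=$1
    # Compose the spdk_tgt command line from the arrays; word splitting of
    # the unquoted ${app_params[$app]} is intentional, as in the original.
    "$SPDK/build/bin/spdk_tgt" ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    echo "Waiting for $app to run on ${app_socket[$app]} (pid ${app_pid[$app]})..."
}

start_app target
echo "started target with pid ${app_pid[target]}"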
00:07:46.655 17:08:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57662 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:46.655 Waiting for target to run... 00:07:46.655 17:08:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57662 /var/tmp/spdk_tgt.sock 00:07:46.655 17:08:47 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57662 ']' 00:07:46.655 17:08:47 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:46.655 17:08:47 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.655 17:08:47 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:46.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:46.655 17:08:47 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.655 17:08:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:46.914 [2024-11-04 17:08:47.459581] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:07:46.914 [2024-11-04 17:08:47.459914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57662 ] 00:07:47.172 [2024-11-04 17:08:47.904737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.172 [2024-11-04 17:08:47.948336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.431 [2024-11-04 17:08:47.979871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.689 17:08:48 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:47.689 17:08:48 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:07:47.689 00:07:47.689 INFO: shutting down applications... 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:47.690 17:08:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
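The shutdown that follows is json_config_test_shutdown_app: send SIGINT to the recorded pid, then poll kill -0 for up to thirty half-second intervals before declaring 'SPDK target shutdown done'. A minimal sketch of that loop, keeping the same 30 x 0.5 s budget as common.sh and reducing the handling of a target that never exits to a message:

#!/usr/bin/env bash
# Sketch: graceful SPDK target shutdown with a bounded wait, as in common.sh.

shutdown_app() {
    local pid=$1

    kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone

    # kill -0 only probes whether the process still exists; loop until it
    # disappears or the 15 s budget (30 * 0.5 s) runs out.
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done

    echo "app with pid $pid did not shut down in time" >&2
    return 1
}

# Example: shutdown_app "$tgt_pid"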
00:07:47.690 17:08:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57662 ]] 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57662 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57662 00:07:47.690 17:08:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:48.257 17:08:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:48.257 17:08:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:48.257 17:08:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57662 00:07:48.257 17:08:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:48.257 17:08:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:48.257 17:08:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:48.257 17:08:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:48.257 SPDK target shutdown done 00:07:48.257 Success 00:07:48.257 17:08:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:48.257 ************************************ 00:07:48.257 END TEST json_config_extra_key 00:07:48.257 ************************************ 00:07:48.257 00:07:48.257 real 0m1.718s 00:07:48.257 user 0m1.553s 00:07:48.257 sys 0m0.467s 00:07:48.257 17:08:48 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.257 17:08:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:48.257 17:08:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:48.257 17:08:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:48.257 17:08:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.257 17:08:48 -- common/autotest_common.sh@10 -- # set +x 00:07:48.257 ************************************ 00:07:48.257 START TEST alias_rpc 00:07:48.257 ************************************ 00:07:48.257 17:08:48 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:48.257 * Looking for test storage... 
00:07:48.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:48.257 17:08:49 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:48.257 17:08:49 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:48.257 17:08:49 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:48.516 17:08:49 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.516 17:08:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:48.516 17:08:49 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.516 17:08:49 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:48.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.516 --rc genhtml_branch_coverage=1 00:07:48.516 --rc genhtml_function_coverage=1 00:07:48.516 --rc genhtml_legend=1 00:07:48.516 --rc geninfo_all_blocks=1 00:07:48.516 --rc geninfo_unexecuted_blocks=1 00:07:48.516 00:07:48.516 ' 00:07:48.517 17:08:49 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.517 --rc genhtml_branch_coverage=1 00:07:48.517 --rc genhtml_function_coverage=1 00:07:48.517 --rc genhtml_legend=1 00:07:48.517 --rc geninfo_all_blocks=1 00:07:48.517 --rc geninfo_unexecuted_blocks=1 00:07:48.517 00:07:48.517 ' 00:07:48.517 17:08:49 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.517 --rc genhtml_branch_coverage=1 00:07:48.517 --rc genhtml_function_coverage=1 00:07:48.517 --rc genhtml_legend=1 00:07:48.517 --rc geninfo_all_blocks=1 00:07:48.517 --rc geninfo_unexecuted_blocks=1 00:07:48.517 00:07:48.517 ' 00:07:48.517 17:08:49 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:48.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.517 --rc genhtml_branch_coverage=1 00:07:48.517 --rc genhtml_function_coverage=1 00:07:48.517 --rc genhtml_legend=1 00:07:48.517 --rc geninfo_all_blocks=1 00:07:48.517 --rc geninfo_unexecuted_blocks=1 00:07:48.517 00:07:48.517 ' 00:07:48.517 17:08:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:48.517 17:08:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57735 00:07:48.517 17:08:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:48.517 17:08:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57735 00:07:48.517 17:08:49 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57735 ']' 00:07:48.517 17:08:49 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.517 17:08:49 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:48.517 17:08:49 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.517 17:08:49 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:48.517 17:08:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.517 [2024-11-04 17:08:49.208290] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:07:48.517 [2024-11-04 17:08:49.208557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57735 ] 00:07:48.775 [2024-11-04 17:08:49.351939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.776 [2024-11-04 17:08:49.412159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.776 [2024-11-04 17:08:49.483013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.038 17:08:49 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.038 17:08:49 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:49.038 17:08:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:49.297 17:08:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57735 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57735 ']' 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57735 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57735 00:07:49.297 killing process with pid 57735 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57735' 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@971 -- # kill 57735 00:07:49.297 17:08:50 alias_rpc -- common/autotest_common.sh@976 -- # wait 57735 00:07:49.865 ************************************ 00:07:49.865 END TEST alias_rpc 00:07:49.865 ************************************ 00:07:49.865 00:07:49.865 real 0m1.455s 00:07:49.865 user 0m1.569s 00:07:49.865 sys 0m0.417s 00:07:49.865 17:08:50 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.865 17:08:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.865 17:08:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:49.865 17:08:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:49.865 17:08:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:49.865 17:08:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.865 17:08:50 -- common/autotest_common.sh@10 -- # set +x 00:07:49.865 ************************************ 00:07:49.866 START TEST spdkcli_tcp 00:07:49.866 ************************************ 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:49.866 * Looking for test storage... 
00:07:49.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.866 17:08:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:49.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.866 --rc genhtml_branch_coverage=1 00:07:49.866 --rc genhtml_function_coverage=1 00:07:49.866 --rc genhtml_legend=1 00:07:49.866 --rc geninfo_all_blocks=1 00:07:49.866 --rc geninfo_unexecuted_blocks=1 00:07:49.866 00:07:49.866 ' 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:49.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.866 --rc genhtml_branch_coverage=1 00:07:49.866 --rc genhtml_function_coverage=1 00:07:49.866 --rc genhtml_legend=1 00:07:49.866 --rc geninfo_all_blocks=1 00:07:49.866 --rc geninfo_unexecuted_blocks=1 00:07:49.866 
00:07:49.866 ' 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:49.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.866 --rc genhtml_branch_coverage=1 00:07:49.866 --rc genhtml_function_coverage=1 00:07:49.866 --rc genhtml_legend=1 00:07:49.866 --rc geninfo_all_blocks=1 00:07:49.866 --rc geninfo_unexecuted_blocks=1 00:07:49.866 00:07:49.866 ' 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:49.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.866 --rc genhtml_branch_coverage=1 00:07:49.866 --rc genhtml_function_coverage=1 00:07:49.866 --rc genhtml_legend=1 00:07:49.866 --rc geninfo_all_blocks=1 00:07:49.866 --rc geninfo_unexecuted_blocks=1 00:07:49.866 00:07:49.866 ' 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57817 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:49.866 17:08:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57817 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57817 ']' 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.866 17:08:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.125 [2024-11-04 17:08:50.749507] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
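What the spdkcli_tcp test exercises next is the TCP path into the RPC server: the target only listens on a UNIX-domain socket, so tcp.sh bridges that socket to 127.0.0.1:9998 with socat and then points rpc.py at the TCP address, which is how the long rpc_get_methods listing below is produced. A minimal sketch of that bridge, assuming socat is installed and port 9998 is free, as in the trace:

#!/usr/bin/env bash
# Sketch: expose the SPDK UNIX-domain RPC socket over TCP and query it.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk.sock          # default spdk_tgt RPC socket
IP=127.0.0.1
PORT=9998

# Bridge TCP:9998 <-> UNIX:/var/tmp/spdk.sock in the background.
socat "TCP-LISTEN:$PORT" "UNIX-CONNECT:$SOCK" &
socat_pid=$!
sleep 0.2   # give socat a moment to bind

# Ask the target for its RPC methods over TCP; -r/-t mirror the retry and
# timeout options used by tcp.sh.
"$SPDK/scripts/rpc.py" -r 100 -t 2 -s "$IP" -p "$PORT" rpc_get_methods

kill "$socat_pid" 2>/dev/null || true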
00:07:50.125 [2024-11-04 17:08:50.749669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57817 ] 00:07:50.125 [2024-11-04 17:08:50.897961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.384 [2024-11-04 17:08:50.954541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.384 [2024-11-04 17:08:50.954550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.384 [2024-11-04 17:08:51.023263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.951 17:08:51 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.951 17:08:51 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:07:50.951 17:08:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57834 00:07:50.951 17:08:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:50.951 17:08:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:51.210 [ 00:07:51.210 "bdev_malloc_delete", 00:07:51.210 "bdev_malloc_create", 00:07:51.210 "bdev_null_resize", 00:07:51.210 "bdev_null_delete", 00:07:51.210 "bdev_null_create", 00:07:51.210 "bdev_nvme_cuse_unregister", 00:07:51.210 "bdev_nvme_cuse_register", 00:07:51.210 "bdev_opal_new_user", 00:07:51.210 "bdev_opal_set_lock_state", 00:07:51.210 "bdev_opal_delete", 00:07:51.210 "bdev_opal_get_info", 00:07:51.210 "bdev_opal_create", 00:07:51.210 "bdev_nvme_opal_revert", 00:07:51.210 "bdev_nvme_opal_init", 00:07:51.210 "bdev_nvme_send_cmd", 00:07:51.210 "bdev_nvme_set_keys", 00:07:51.210 "bdev_nvme_get_path_iostat", 00:07:51.210 "bdev_nvme_get_mdns_discovery_info", 00:07:51.210 "bdev_nvme_stop_mdns_discovery", 00:07:51.210 "bdev_nvme_start_mdns_discovery", 00:07:51.210 "bdev_nvme_set_multipath_policy", 00:07:51.210 "bdev_nvme_set_preferred_path", 00:07:51.210 "bdev_nvme_get_io_paths", 00:07:51.210 "bdev_nvme_remove_error_injection", 00:07:51.210 "bdev_nvme_add_error_injection", 00:07:51.210 "bdev_nvme_get_discovery_info", 00:07:51.210 "bdev_nvme_stop_discovery", 00:07:51.210 "bdev_nvme_start_discovery", 00:07:51.210 "bdev_nvme_get_controller_health_info", 00:07:51.210 "bdev_nvme_disable_controller", 00:07:51.210 "bdev_nvme_enable_controller", 00:07:51.210 "bdev_nvme_reset_controller", 00:07:51.210 "bdev_nvme_get_transport_statistics", 00:07:51.210 "bdev_nvme_apply_firmware", 00:07:51.210 "bdev_nvme_detach_controller", 00:07:51.210 "bdev_nvme_get_controllers", 00:07:51.210 "bdev_nvme_attach_controller", 00:07:51.210 "bdev_nvme_set_hotplug", 00:07:51.210 "bdev_nvme_set_options", 00:07:51.210 "bdev_passthru_delete", 00:07:51.210 "bdev_passthru_create", 00:07:51.210 "bdev_lvol_set_parent_bdev", 00:07:51.210 "bdev_lvol_set_parent", 00:07:51.210 "bdev_lvol_check_shallow_copy", 00:07:51.210 "bdev_lvol_start_shallow_copy", 00:07:51.210 "bdev_lvol_grow_lvstore", 00:07:51.210 "bdev_lvol_get_lvols", 00:07:51.210 "bdev_lvol_get_lvstores", 00:07:51.210 "bdev_lvol_delete", 00:07:51.210 "bdev_lvol_set_read_only", 00:07:51.211 "bdev_lvol_resize", 00:07:51.211 "bdev_lvol_decouple_parent", 00:07:51.211 "bdev_lvol_inflate", 00:07:51.211 "bdev_lvol_rename", 00:07:51.211 "bdev_lvol_clone_bdev", 00:07:51.211 "bdev_lvol_clone", 00:07:51.211 "bdev_lvol_snapshot", 
00:07:51.211 "bdev_lvol_create", 00:07:51.211 "bdev_lvol_delete_lvstore", 00:07:51.211 "bdev_lvol_rename_lvstore", 00:07:51.211 "bdev_lvol_create_lvstore", 00:07:51.211 "bdev_raid_set_options", 00:07:51.211 "bdev_raid_remove_base_bdev", 00:07:51.211 "bdev_raid_add_base_bdev", 00:07:51.211 "bdev_raid_delete", 00:07:51.211 "bdev_raid_create", 00:07:51.211 "bdev_raid_get_bdevs", 00:07:51.211 "bdev_error_inject_error", 00:07:51.211 "bdev_error_delete", 00:07:51.211 "bdev_error_create", 00:07:51.211 "bdev_split_delete", 00:07:51.211 "bdev_split_create", 00:07:51.211 "bdev_delay_delete", 00:07:51.211 "bdev_delay_create", 00:07:51.211 "bdev_delay_update_latency", 00:07:51.211 "bdev_zone_block_delete", 00:07:51.211 "bdev_zone_block_create", 00:07:51.211 "blobfs_create", 00:07:51.211 "blobfs_detect", 00:07:51.211 "blobfs_set_cache_size", 00:07:51.211 "bdev_aio_delete", 00:07:51.211 "bdev_aio_rescan", 00:07:51.211 "bdev_aio_create", 00:07:51.211 "bdev_ftl_set_property", 00:07:51.211 "bdev_ftl_get_properties", 00:07:51.211 "bdev_ftl_get_stats", 00:07:51.211 "bdev_ftl_unmap", 00:07:51.211 "bdev_ftl_unload", 00:07:51.211 "bdev_ftl_delete", 00:07:51.211 "bdev_ftl_load", 00:07:51.211 "bdev_ftl_create", 00:07:51.211 "bdev_virtio_attach_controller", 00:07:51.211 "bdev_virtio_scsi_get_devices", 00:07:51.211 "bdev_virtio_detach_controller", 00:07:51.211 "bdev_virtio_blk_set_hotplug", 00:07:51.211 "bdev_iscsi_delete", 00:07:51.211 "bdev_iscsi_create", 00:07:51.211 "bdev_iscsi_set_options", 00:07:51.211 "bdev_uring_delete", 00:07:51.211 "bdev_uring_rescan", 00:07:51.211 "bdev_uring_create", 00:07:51.211 "accel_error_inject_error", 00:07:51.211 "ioat_scan_accel_module", 00:07:51.211 "dsa_scan_accel_module", 00:07:51.211 "iaa_scan_accel_module", 00:07:51.211 "keyring_file_remove_key", 00:07:51.211 "keyring_file_add_key", 00:07:51.211 "keyring_linux_set_options", 00:07:51.211 "fsdev_aio_delete", 00:07:51.211 "fsdev_aio_create", 00:07:51.211 "iscsi_get_histogram", 00:07:51.211 "iscsi_enable_histogram", 00:07:51.211 "iscsi_set_options", 00:07:51.211 "iscsi_get_auth_groups", 00:07:51.211 "iscsi_auth_group_remove_secret", 00:07:51.211 "iscsi_auth_group_add_secret", 00:07:51.211 "iscsi_delete_auth_group", 00:07:51.211 "iscsi_create_auth_group", 00:07:51.211 "iscsi_set_discovery_auth", 00:07:51.211 "iscsi_get_options", 00:07:51.211 "iscsi_target_node_request_logout", 00:07:51.211 "iscsi_target_node_set_redirect", 00:07:51.211 "iscsi_target_node_set_auth", 00:07:51.211 "iscsi_target_node_add_lun", 00:07:51.211 "iscsi_get_stats", 00:07:51.211 "iscsi_get_connections", 00:07:51.211 "iscsi_portal_group_set_auth", 00:07:51.211 "iscsi_start_portal_group", 00:07:51.211 "iscsi_delete_portal_group", 00:07:51.211 "iscsi_create_portal_group", 00:07:51.211 "iscsi_get_portal_groups", 00:07:51.211 "iscsi_delete_target_node", 00:07:51.211 "iscsi_target_node_remove_pg_ig_maps", 00:07:51.211 "iscsi_target_node_add_pg_ig_maps", 00:07:51.211 "iscsi_create_target_node", 00:07:51.211 "iscsi_get_target_nodes", 00:07:51.211 "iscsi_delete_initiator_group", 00:07:51.211 "iscsi_initiator_group_remove_initiators", 00:07:51.211 "iscsi_initiator_group_add_initiators", 00:07:51.211 "iscsi_create_initiator_group", 00:07:51.211 "iscsi_get_initiator_groups", 00:07:51.211 "nvmf_set_crdt", 00:07:51.211 "nvmf_set_config", 00:07:51.211 "nvmf_set_max_subsystems", 00:07:51.211 "nvmf_stop_mdns_prr", 00:07:51.211 "nvmf_publish_mdns_prr", 00:07:51.211 "nvmf_subsystem_get_listeners", 00:07:51.211 "nvmf_subsystem_get_qpairs", 00:07:51.211 
"nvmf_subsystem_get_controllers", 00:07:51.211 "nvmf_get_stats", 00:07:51.211 "nvmf_get_transports", 00:07:51.211 "nvmf_create_transport", 00:07:51.211 "nvmf_get_targets", 00:07:51.211 "nvmf_delete_target", 00:07:51.211 "nvmf_create_target", 00:07:51.211 "nvmf_subsystem_allow_any_host", 00:07:51.211 "nvmf_subsystem_set_keys", 00:07:51.211 "nvmf_subsystem_remove_host", 00:07:51.211 "nvmf_subsystem_add_host", 00:07:51.211 "nvmf_ns_remove_host", 00:07:51.211 "nvmf_ns_add_host", 00:07:51.211 "nvmf_subsystem_remove_ns", 00:07:51.211 "nvmf_subsystem_set_ns_ana_group", 00:07:51.211 "nvmf_subsystem_add_ns", 00:07:51.211 "nvmf_subsystem_listener_set_ana_state", 00:07:51.211 "nvmf_discovery_get_referrals", 00:07:51.211 "nvmf_discovery_remove_referral", 00:07:51.211 "nvmf_discovery_add_referral", 00:07:51.211 "nvmf_subsystem_remove_listener", 00:07:51.211 "nvmf_subsystem_add_listener", 00:07:51.211 "nvmf_delete_subsystem", 00:07:51.211 "nvmf_create_subsystem", 00:07:51.211 "nvmf_get_subsystems", 00:07:51.211 "env_dpdk_get_mem_stats", 00:07:51.211 "nbd_get_disks", 00:07:51.211 "nbd_stop_disk", 00:07:51.211 "nbd_start_disk", 00:07:51.211 "ublk_recover_disk", 00:07:51.211 "ublk_get_disks", 00:07:51.211 "ublk_stop_disk", 00:07:51.211 "ublk_start_disk", 00:07:51.211 "ublk_destroy_target", 00:07:51.211 "ublk_create_target", 00:07:51.211 "virtio_blk_create_transport", 00:07:51.211 "virtio_blk_get_transports", 00:07:51.211 "vhost_controller_set_coalescing", 00:07:51.211 "vhost_get_controllers", 00:07:51.211 "vhost_delete_controller", 00:07:51.211 "vhost_create_blk_controller", 00:07:51.211 "vhost_scsi_controller_remove_target", 00:07:51.211 "vhost_scsi_controller_add_target", 00:07:51.211 "vhost_start_scsi_controller", 00:07:51.211 "vhost_create_scsi_controller", 00:07:51.211 "thread_set_cpumask", 00:07:51.211 "scheduler_set_options", 00:07:51.211 "framework_get_governor", 00:07:51.211 "framework_get_scheduler", 00:07:51.211 "framework_set_scheduler", 00:07:51.211 "framework_get_reactors", 00:07:51.211 "thread_get_io_channels", 00:07:51.211 "thread_get_pollers", 00:07:51.211 "thread_get_stats", 00:07:51.211 "framework_monitor_context_switch", 00:07:51.211 "spdk_kill_instance", 00:07:51.211 "log_enable_timestamps", 00:07:51.211 "log_get_flags", 00:07:51.211 "log_clear_flag", 00:07:51.211 "log_set_flag", 00:07:51.211 "log_get_level", 00:07:51.211 "log_set_level", 00:07:51.211 "log_get_print_level", 00:07:51.211 "log_set_print_level", 00:07:51.211 "framework_enable_cpumask_locks", 00:07:51.211 "framework_disable_cpumask_locks", 00:07:51.211 "framework_wait_init", 00:07:51.211 "framework_start_init", 00:07:51.211 "scsi_get_devices", 00:07:51.211 "bdev_get_histogram", 00:07:51.211 "bdev_enable_histogram", 00:07:51.211 "bdev_set_qos_limit", 00:07:51.211 "bdev_set_qd_sampling_period", 00:07:51.211 "bdev_get_bdevs", 00:07:51.211 "bdev_reset_iostat", 00:07:51.211 "bdev_get_iostat", 00:07:51.211 "bdev_examine", 00:07:51.211 "bdev_wait_for_examine", 00:07:51.211 "bdev_set_options", 00:07:51.211 "accel_get_stats", 00:07:51.211 "accel_set_options", 00:07:51.211 "accel_set_driver", 00:07:51.211 "accel_crypto_key_destroy", 00:07:51.211 "accel_crypto_keys_get", 00:07:51.211 "accel_crypto_key_create", 00:07:51.211 "accel_assign_opc", 00:07:51.211 "accel_get_module_info", 00:07:51.211 "accel_get_opc_assignments", 00:07:51.211 "vmd_rescan", 00:07:51.211 "vmd_remove_device", 00:07:51.211 "vmd_enable", 00:07:51.211 "sock_get_default_impl", 00:07:51.211 "sock_set_default_impl", 00:07:51.211 "sock_impl_set_options", 00:07:51.211 
"sock_impl_get_options", 00:07:51.211 "iobuf_get_stats", 00:07:51.211 "iobuf_set_options", 00:07:51.211 "keyring_get_keys", 00:07:51.211 "framework_get_pci_devices", 00:07:51.211 "framework_get_config", 00:07:51.211 "framework_get_subsystems", 00:07:51.211 "fsdev_set_opts", 00:07:51.211 "fsdev_get_opts", 00:07:51.211 "trace_get_info", 00:07:51.211 "trace_get_tpoint_group_mask", 00:07:51.211 "trace_disable_tpoint_group", 00:07:51.211 "trace_enable_tpoint_group", 00:07:51.211 "trace_clear_tpoint_mask", 00:07:51.211 "trace_set_tpoint_mask", 00:07:51.211 "notify_get_notifications", 00:07:51.211 "notify_get_types", 00:07:51.211 "spdk_get_version", 00:07:51.211 "rpc_get_methods" 00:07:51.211 ] 00:07:51.470 17:08:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.470 17:08:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:51.470 17:08:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57817 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57817 ']' 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57817 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57817 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:51.470 killing process with pid 57817 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57817' 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57817 00:07:51.470 17:08:52 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57817 00:07:51.729 ************************************ 00:07:51.729 END TEST spdkcli_tcp 00:07:51.729 ************************************ 00:07:51.729 00:07:51.729 real 0m1.998s 00:07:51.729 user 0m3.747s 00:07:51.729 sys 0m0.523s 00:07:51.729 17:08:52 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.729 17:08:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.729 17:08:52 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:51.729 17:08:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.729 17:08:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.729 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:07:51.729 ************************************ 00:07:51.729 START TEST dpdk_mem_utility 00:07:51.729 ************************************ 00:07:51.729 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:51.988 * Looking for test storage... 
00:07:51.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.988 17:08:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.988 --rc genhtml_branch_coverage=1 00:07:51.988 --rc genhtml_function_coverage=1 00:07:51.988 --rc genhtml_legend=1 00:07:51.988 --rc geninfo_all_blocks=1 00:07:51.988 --rc geninfo_unexecuted_blocks=1 00:07:51.988 00:07:51.988 ' 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.988 --rc 
genhtml_branch_coverage=1 00:07:51.988 --rc genhtml_function_coverage=1 00:07:51.988 --rc genhtml_legend=1 00:07:51.988 --rc geninfo_all_blocks=1 00:07:51.988 --rc geninfo_unexecuted_blocks=1 00:07:51.988 00:07:51.988 ' 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.988 --rc genhtml_branch_coverage=1 00:07:51.988 --rc genhtml_function_coverage=1 00:07:51.988 --rc genhtml_legend=1 00:07:51.988 --rc geninfo_all_blocks=1 00:07:51.988 --rc geninfo_unexecuted_blocks=1 00:07:51.988 00:07:51.988 ' 00:07:51.988 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.988 --rc genhtml_branch_coverage=1 00:07:51.988 --rc genhtml_function_coverage=1 00:07:51.988 --rc genhtml_legend=1 00:07:51.988 --rc geninfo_all_blocks=1 00:07:51.988 --rc geninfo_unexecuted_blocks=1 00:07:51.988 00:07:51.988 ' 00:07:51.988 17:08:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:51.988 17:08:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57916 00:07:51.988 17:08:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57916 00:07:51.989 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57916 ']' 00:07:51.989 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.989 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.989 17:08:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:51.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.989 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.989 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.989 17:08:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:51.989 [2024-11-04 17:08:52.745606] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
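The dpdk_mem_utility test that starts here has a small core: ask the running target to dump its DPDK memory state with the env_dpdk_get_mem_stats RPC (which replies with the dump filename, as the JSON below shows), then feed that dump to scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once per-malloc-heap with -m 0. A condensed sketch of that sequence, assuming a target is already listening on the default socket and assuming dpdk_mem_info.py picks up /tmp/spdk_mem_dump.txt by default, as the back-to-back invocation in the trace suggests:

#!/usr/bin/env bash
# Sketch: dump and summarize the target's DPDK memory usage.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk

# The RPC writes the raw dump and replies with its location, e.g.
# {"filename": "/tmp/spdk_mem_dump.txt"}.
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones ...
"$SPDK/scripts/dpdk_mem_info.py"

# ... and show the per-element breakdown of malloc heap 0.
"$SPDK/scripts/dpdk_mem_info.py" -m 0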
00:07:51.989 [2024-11-04 17:08:52.745717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57916 ] 00:07:52.247 [2024-11-04 17:08:52.889922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.247 [2024-11-04 17:08:52.948904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.247 [2024-11-04 17:08:53.017065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.506 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.506 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:07:52.506 17:08:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:52.506 17:08:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:52.506 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.506 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:52.506 { 00:07:52.506 "filename": "/tmp/spdk_mem_dump.txt" 00:07:52.506 } 00:07:52.506 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.506 17:08:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:52.506 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:52.506 1 heaps totaling size 810.000000 MiB 00:07:52.506 size: 810.000000 MiB heap id: 0 00:07:52.506 end heaps---------- 00:07:52.506 9 mempools totaling size 595.772034 MiB 00:07:52.506 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:52.506 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:52.506 size: 92.545471 MiB name: bdev_io_57916 00:07:52.506 size: 50.003479 MiB name: msgpool_57916 00:07:52.506 size: 36.509338 MiB name: fsdev_io_57916 00:07:52.506 size: 21.763794 MiB name: PDU_Pool 00:07:52.506 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:52.506 size: 4.133484 MiB name: evtpool_57916 00:07:52.506 size: 0.026123 MiB name: Session_Pool 00:07:52.506 end mempools------- 00:07:52.506 6 memzones totaling size 4.142822 MiB 00:07:52.506 size: 1.000366 MiB name: RG_ring_0_57916 00:07:52.506 size: 1.000366 MiB name: RG_ring_1_57916 00:07:52.506 size: 1.000366 MiB name: RG_ring_4_57916 00:07:52.506 size: 1.000366 MiB name: RG_ring_5_57916 00:07:52.506 size: 0.125366 MiB name: RG_ring_2_57916 00:07:52.506 size: 0.015991 MiB name: RG_ring_3_57916 00:07:52.506 end memzones------- 00:07:52.506 17:08:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:52.767 heap id: 0 total size: 810.000000 MiB number of busy elements: 311 number of free elements: 15 00:07:52.767 list of free elements. 
size: 10.813599 MiB 00:07:52.767 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:52.767 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:52.767 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:52.767 element at address: 0x200000400000 with size: 0.993958 MiB 00:07:52.767 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:52.767 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:52.767 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:52.767 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:52.767 element at address: 0x20001a600000 with size: 0.567322 MiB 00:07:52.767 element at address: 0x20000a600000 with size: 0.488892 MiB 00:07:52.767 element at address: 0x200000c00000 with size: 0.487000 MiB 00:07:52.767 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:52.767 element at address: 0x200003e00000 with size: 0.480286 MiB 00:07:52.767 element at address: 0x200027a00000 with size: 0.396484 MiB 00:07:52.767 element at address: 0x200000800000 with size: 0.351746 MiB 00:07:52.767 list of standard malloc elements. size: 199.267517 MiB 00:07:52.767 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:52.767 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:52.767 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:52.767 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:52.767 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:52.767 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:52.767 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:52.767 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:52.767 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:52.767 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:07:52.767 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000085e580 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087e840 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087e900 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f080 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f140 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f200 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f380 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f440 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f500 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:52.767 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:07:52.767 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:07:52.767 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:52.768 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691480 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691540 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691600 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691780 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691840 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691900 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692080 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692140 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692200 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692380 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692440 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692500 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692680 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692740 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692800 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692980 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692e00 with size: 0.000183 MiB 
00:07:52.768 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693040 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693100 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693280 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693340 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693400 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693580 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693640 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693700 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693880 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693940 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694000 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694180 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694240 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694300 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694480 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694540 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694600 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694780 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694840 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694900 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a695080 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a695140 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a695200 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:07:52.768 element at 
address: 0x20001a695380 with size: 0.000183 MiB 00:07:52.768 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a65800 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a658c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6c4c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:07:52.768 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e7c0 
with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:52.769 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:52.769 list of memzone associated elements. 
size: 599.918884 MiB 00:07:52.769 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:52.769 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:52.769 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:52.769 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:52.769 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:52.769 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57916_0 00:07:52.769 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:52.769 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57916_0 00:07:52.769 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:52.769 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57916_0 00:07:52.769 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:52.769 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:52.769 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:52.769 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:52.769 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:52.769 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57916_0 00:07:52.769 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:52.769 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57916 00:07:52.769 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:52.769 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57916 00:07:52.769 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:52.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:52.769 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:52.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:52.769 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:52.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:52.769 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:52.769 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:52.769 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:52.769 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57916 00:07:52.769 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:52.769 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57916 00:07:52.769 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:52.769 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57916 00:07:52.769 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:52.769 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57916 00:07:52.769 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:52.769 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57916 00:07:52.769 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:52.769 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57916 00:07:52.769 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:52.769 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:52.769 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:52.769 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:52.769 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:52.769 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:52.769 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:52.769 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57916 00:07:52.769 element at address: 0x20000085e640 with size: 0.125488 MiB 00:07:52.769 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57916 00:07:52.769 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:52.769 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:52.769 element at address: 0x200027a65980 with size: 0.023743 MiB 00:07:52.769 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:52.769 element at address: 0x20000085a380 with size: 0.016113 MiB 00:07:52.769 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57916 00:07:52.769 element at address: 0x200027a6bac0 with size: 0.002441 MiB 00:07:52.769 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:52.769 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:07:52.769 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57916 00:07:52.769 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:52.769 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57916 00:07:52.769 element at address: 0x20000085a180 with size: 0.000305 MiB 00:07:52.769 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57916 00:07:52.769 element at address: 0x200027a6c580 with size: 0.000305 MiB 00:07:52.769 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:52.769 17:08:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:52.769 17:08:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57916 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57916 ']' 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57916 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57916 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57916' 00:07:52.769 killing process with pid 57916 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57916 00:07:52.769 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57916 00:07:53.029 00:07:53.029 real 0m1.271s 00:07:53.029 user 0m1.223s 00:07:53.029 sys 0m0.428s 00:07:53.029 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.029 17:08:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.029 ************************************ 00:07:53.029 END TEST dpdk_mem_utility 00:07:53.029 ************************************ 00:07:53.289 17:08:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:53.289 17:08:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:53.289 17:08:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.289 17:08:53 -- common/autotest_common.sh@10 -- # set +x 
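Editor's note: the summary above reports 9 mempools totaling 595.772034 MiB, and the per-pool lines do add up to that figure (212.674988 + 158.602051 + 92.545471 + 50.003479 + 36.509338 + 21.763794 + 19.513306 + 4.133484 + 0.026123 = 595.772034 MiB). A small, illustrative cross-check against the dpdk_mem_info.py output layout captured above (assuming the script's stdout format matches what the log shows):

MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
# Sum the "size: <MiB> MiB name: <pool>" lines between the mempool header and
# the "end mempools" marker, as printed in the dump above.
"$MEM_SCRIPT" | sed -n '/mempools totaling/,/end mempools/p' \
  | awk '/MiB name:/ { sum += $2 } END { printf "sum of listed mempools: %.6f MiB\n", sum }'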
00:07:53.289 ************************************ 00:07:53.289 START TEST event 00:07:53.289 ************************************ 00:07:53.289 17:08:53 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:53.289 * Looking for test storage... 00:07:53.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:53.289 17:08:53 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:53.289 17:08:53 event -- common/autotest_common.sh@1691 -- # lcov --version 00:07:53.289 17:08:53 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:53.289 17:08:54 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:53.289 17:08:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.289 17:08:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.289 17:08:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.289 17:08:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.289 17:08:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.289 17:08:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.289 17:08:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.289 17:08:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.289 17:08:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.289 17:08:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.289 17:08:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.289 17:08:54 event -- scripts/common.sh@344 -- # case "$op" in 00:07:53.289 17:08:54 event -- scripts/common.sh@345 -- # : 1 00:07:53.289 17:08:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.289 17:08:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.289 17:08:54 event -- scripts/common.sh@365 -- # decimal 1 00:07:53.289 17:08:54 event -- scripts/common.sh@353 -- # local d=1 00:07:53.289 17:08:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.289 17:08:54 event -- scripts/common.sh@355 -- # echo 1 00:07:53.289 17:08:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.289 17:08:54 event -- scripts/common.sh@366 -- # decimal 2 00:07:53.289 17:08:54 event -- scripts/common.sh@353 -- # local d=2 00:07:53.289 17:08:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.289 17:08:54 event -- scripts/common.sh@355 -- # echo 2 00:07:53.289 17:08:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.289 17:08:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.289 17:08:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.289 17:08:54 event -- scripts/common.sh@368 -- # return 0 00:07:53.289 17:08:54 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.289 17:08:54 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:53.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.289 --rc genhtml_branch_coverage=1 00:07:53.289 --rc genhtml_function_coverage=1 00:07:53.289 --rc genhtml_legend=1 00:07:53.289 --rc geninfo_all_blocks=1 00:07:53.289 --rc geninfo_unexecuted_blocks=1 00:07:53.289 00:07:53.289 ' 00:07:53.289 17:08:54 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:53.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.289 --rc genhtml_branch_coverage=1 00:07:53.289 --rc genhtml_function_coverage=1 00:07:53.289 --rc genhtml_legend=1 00:07:53.289 --rc 
geninfo_all_blocks=1 00:07:53.289 --rc geninfo_unexecuted_blocks=1 00:07:53.289 00:07:53.289 ' 00:07:53.289 17:08:54 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:53.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.289 --rc genhtml_branch_coverage=1 00:07:53.289 --rc genhtml_function_coverage=1 00:07:53.289 --rc genhtml_legend=1 00:07:53.289 --rc geninfo_all_blocks=1 00:07:53.289 --rc geninfo_unexecuted_blocks=1 00:07:53.289 00:07:53.289 ' 00:07:53.289 17:08:54 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:53.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.289 --rc genhtml_branch_coverage=1 00:07:53.289 --rc genhtml_function_coverage=1 00:07:53.289 --rc genhtml_legend=1 00:07:53.289 --rc geninfo_all_blocks=1 00:07:53.289 --rc geninfo_unexecuted_blocks=1 00:07:53.289 00:07:53.289 ' 00:07:53.289 17:08:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:53.289 17:08:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:53.289 17:08:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:53.289 17:08:54 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:53.289 17:08:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.289 17:08:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:53.289 ************************************ 00:07:53.289 START TEST event_perf 00:07:53.289 ************************************ 00:07:53.289 17:08:54 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:53.289 Running I/O for 1 seconds...[2024-11-04 17:08:54.076740] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:07:53.289 [2024-11-04 17:08:54.076847] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57988 ] 00:07:53.548 [2024-11-04 17:08:54.227397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.548 [2024-11-04 17:08:54.293699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.548 Running I/O for 1 seconds...[2024-11-04 17:08:54.293854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.548 [2024-11-04 17:08:54.293947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.548 [2024-11-04 17:08:54.294175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.925 00:07:54.925 lcore 0: 200026 00:07:54.925 lcore 1: 200025 00:07:54.925 lcore 2: 200025 00:07:54.925 lcore 3: 200027 00:07:54.925 done. 
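Editor's note: for reference, the four per-lcore counters above (200026 + 200025 + 200025 + 200027) add up to 800,103 events over the 1-second run requested with -t 1, i.e. roughly 200 k events handled per reactor per second on this VM.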
00:07:54.925 00:07:54.925 real 0m1.292s 00:07:54.925 user 0m4.120s 00:07:54.925 sys 0m0.055s 00:07:54.925 17:08:55 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.925 ************************************ 00:07:54.925 END TEST event_perf 00:07:54.925 ************************************ 00:07:54.925 17:08:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.925 17:08:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:54.925 17:08:55 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:54.925 17:08:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.925 17:08:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.925 ************************************ 00:07:54.925 START TEST event_reactor 00:07:54.925 ************************************ 00:07:54.925 17:08:55 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:54.925 [2024-11-04 17:08:55.417347] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:07:54.925 [2024-11-04 17:08:55.417912] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58025 ] 00:07:54.925 [2024-11-04 17:08:55.556797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.925 [2024-11-04 17:08:55.612029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.860 test_start 00:07:55.860 oneshot 00:07:55.860 tick 100 00:07:55.860 tick 100 00:07:55.860 tick 250 00:07:55.860 tick 100 00:07:55.860 tick 100 00:07:55.860 tick 100 00:07:55.860 tick 250 00:07:55.860 tick 500 00:07:55.860 tick 100 00:07:55.860 tick 100 00:07:55.860 tick 250 00:07:55.860 tick 100 00:07:55.860 tick 100 00:07:55.860 test_end 00:07:55.860 00:07:55.860 real 0m1.260s 00:07:55.860 user 0m1.117s 00:07:55.860 sys 0m0.037s 00:07:55.860 17:08:56 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.860 17:08:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:55.860 ************************************ 00:07:55.860 END TEST event_reactor 00:07:55.860 ************************************ 00:07:56.118 17:08:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:56.118 17:08:56 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:56.118 17:08:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.118 17:08:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.118 ************************************ 00:07:56.118 START TEST event_reactor_perf 00:07:56.118 ************************************ 00:07:56.118 17:08:56 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:56.118 [2024-11-04 17:08:56.728718] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:07:56.118 [2024-11-04 17:08:56.729418] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58062 ] 00:07:56.118 [2024-11-04 17:08:56.869462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.376 [2024-11-04 17:08:56.924488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.312 test_start 00:07:57.312 test_end 00:07:57.312 Performance: 396932 events per second 00:07:57.312 00:07:57.312 real 0m1.256s 00:07:57.312 user 0m1.109s 00:07:57.312 sys 0m0.042s 00:07:57.312 17:08:57 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.312 17:08:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.312 ************************************ 00:07:57.312 END TEST event_reactor_perf 00:07:57.313 ************************************ 00:07:57.313 17:08:58 event -- event/event.sh@49 -- # uname -s 00:07:57.313 17:08:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:57.313 17:08:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:57.313 17:08:58 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:57.313 17:08:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.313 17:08:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.313 ************************************ 00:07:57.313 START TEST event_scheduler 00:07:57.313 ************************************ 00:07:57.313 17:08:58 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:57.313 * Looking for test storage... 
00:07:57.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:57.313 17:08:58 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:57.313 17:08:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:57.313 17:08:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:07:57.572 17:08:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:57.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.572 17:08:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:57.572 17:08:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.572 17:08:58 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:57.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.572 --rc genhtml_branch_coverage=1 00:07:57.572 --rc genhtml_function_coverage=1 00:07:57.572 --rc genhtml_legend=1 00:07:57.572 --rc geninfo_all_blocks=1 00:07:57.572 --rc geninfo_unexecuted_blocks=1 00:07:57.572 00:07:57.572 ' 00:07:57.572 17:08:58 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:57.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.572 --rc genhtml_branch_coverage=1 00:07:57.572 --rc genhtml_function_coverage=1 00:07:57.572 --rc genhtml_legend=1 00:07:57.572 --rc geninfo_all_blocks=1 00:07:57.572 --rc geninfo_unexecuted_blocks=1 00:07:57.572 00:07:57.572 ' 00:07:57.572 17:08:58 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:57.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.572 --rc genhtml_branch_coverage=1 00:07:57.572 --rc genhtml_function_coverage=1 00:07:57.572 --rc genhtml_legend=1 00:07:57.572 --rc geninfo_all_blocks=1 00:07:57.572 --rc geninfo_unexecuted_blocks=1 00:07:57.572 00:07:57.572 ' 00:07:57.572 17:08:58 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:57.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.572 --rc genhtml_branch_coverage=1 00:07:57.572 --rc genhtml_function_coverage=1 00:07:57.572 --rc genhtml_legend=1 00:07:57.572 --rc geninfo_all_blocks=1 00:07:57.572 --rc geninfo_unexecuted_blocks=1 00:07:57.572 00:07:57.572 ' 00:07:57.572 17:08:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:57.572 17:08:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58126 00:07:57.572 17:08:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.572 17:08:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:57.572 17:08:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58126 00:07:57.572 17:08:58 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58126 ']' 00:07:57.573 17:08:58 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.573 17:08:58 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.573 17:08:58 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.573 17:08:58 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.573 17:08:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.573 [2024-11-04 17:08:58.270617] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:07:57.573 [2024-11-04 17:08:58.271221] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58126 ] 00:07:57.832 [2024-11-04 17:08:58.415247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.832 [2024-11-04 17:08:58.481288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.832 [2024-11-04 17:08:58.481424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.832 [2024-11-04 17:08:58.481517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.832 [2024-11-04 17:08:58.481520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:07:57.832 17:08:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.832 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:57.832 POWER: Cannot set governor of lcore 0 to userspace 00:07:57.832 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:57.832 POWER: Cannot set governor of lcore 0 to performance 00:07:57.832 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:57.832 POWER: Cannot set governor of lcore 0 to userspace 00:07:57.832 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:57.832 POWER: Cannot set governor of lcore 0 to userspace 00:07:57.832 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:57.832 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:57.832 POWER: Unable to set Power Management Environment for lcore 0 00:07:57.832 [2024-11-04 17:08:58.524523] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:57.832 [2024-11-04 17:08:58.524684] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:57.832 [2024-11-04 17:08:58.524787] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:57.832 [2024-11-04 17:08:58.524897] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:57.832 [2024-11-04 17:08:58.524941] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:57.832 [2024-11-04 17:08:58.525010] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.832 17:08:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.832 [2024-11-04 17:08:58.584897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.832 [2024-11-04 17:08:58.619533] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.832 17:08:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.832 17:08:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:57.832 ************************************ 00:07:57.832 START TEST scheduler_create_thread 00:07:57.832 ************************************ 00:07:57.832 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:07:57.832 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:57.832 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.832 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 2 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 3 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 4 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.091 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.091 5 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.092 6 00:07:58.092 
17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.092 7 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.092 8 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.092 9 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.092 10 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.092 17:08:58 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.092 17:08:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.660 17:08:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.660 17:08:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:58.660 17:08:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:58.660 17:08:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.660 17:08:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.597 ************************************ 00:07:59.597 END TEST scheduler_create_thread 00:07:59.597 ************************************ 00:07:59.597 17:09:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.597 00:07:59.597 real 0m1.755s 00:07:59.597 user 0m0.014s 00:07:59.597 sys 0m0.006s 00:07:59.597 17:09:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.597 17:09:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.856 17:09:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:59.856 17:09:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58126 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58126 ']' 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58126 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58126 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:59.856 killing process with pid 58126 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58126' 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58126 00:07:59.856 17:09:00 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58126 00:08:00.116 [2024-11-04 17:09:00.862534] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
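The scheduler test above drives everything through rpc_cmd with the test-only scheduler_plugin: it switches the app to the dynamic scheduler, then creates pinned threads at 100% and 0% activity, two unpinned threads, adjusts one thread's activity, and deletes another. The following is a rough standalone sketch of those calls, not the harness itself; it assumes the app listens on the default /var/tmp/spdk.sock and that the scheduler_plugin module is importable by scripts/rpc.py, neither of which is spelled out in this trace.

    # Sketch only: standalone equivalents of the rpc_cmd calls traced above.
    rpc=scripts/rpc.py
    sock=/var/tmp/spdk.sock                       # assumption: default RPC socket

    # Switch to the dynamic scheduler and finish subsystem init.
    $rpc -s $sock framework_set_scheduler dynamic
    $rpc -s $sock framework_start_init

    # One busy (100%) and one idle (0%) thread pinned to each of the four cores.
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
        $rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    # Unpinned threads: one at 30% activity, one raised from 0% to 50% after creation.
    $rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    tid=$($rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc -s $sock --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50

    # A thread created only to be deleted again.
    tid=$($rpc -s $sock --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc -s $sock --plugin scheduler_plugin scheduler_thread_delete "$tid"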
00:08:00.375 00:08:00.375 real 0m3.023s 00:08:00.375 user 0m3.706s 00:08:00.375 sys 0m0.361s 00:08:00.375 17:09:01 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.375 ************************************ 00:08:00.375 END TEST event_scheduler 00:08:00.375 ************************************ 00:08:00.375 17:09:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:00.375 17:09:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:00.375 17:09:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:00.375 17:09:01 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.375 17:09:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.375 17:09:01 event -- common/autotest_common.sh@10 -- # set +x 00:08:00.375 ************************************ 00:08:00.375 START TEST app_repeat 00:08:00.375 ************************************ 00:08:00.375 17:09:01 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:00.375 Process app_repeat pid: 58207 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58207 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58207' 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:00.375 spdk_app_start Round 0 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:00.375 17:09:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58207 /var/tmp/spdk-nbd.sock 00:08:00.375 17:09:01 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58207 ']' 00:08:00.375 17:09:01 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.375 17:09:01 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.375 17:09:01 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:00.375 17:09:01 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.375 17:09:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:00.375 [2024-11-04 17:09:01.143409] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
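The app_repeat run that starts here follows a launch-and-wait pattern: load the nbd kernel module, start the repeat app on its own RPC socket with a two-core mask, and block until the socket answers before issuing any RPCs. A minimal sketch of that pattern is shown below; the waitforlisten helper's internals are not visible in this log, so using rpc_get_methods as the readiness probe and the 0.1 s poll interval are assumptions.

    # Sketch of the launch-and-wait pattern, not the literal autotest helpers.
    modprobe nbd
    test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT   # killprocess comes from the harness

    for ((i = 0; i < 100; i++)); do               # max_retries=100, as in the trace
        if scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods &> /dev/null; then
            break                                  # assumption: probe via rpc_get_methods
        fi
        sleep 0.1                                  # assumption: poll interval
    done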
00:08:00.375 [2024-11-04 17:09:01.143500] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58207 ] 00:08:00.634 [2024-11-04 17:09:01.287182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.634 [2024-11-04 17:09:01.333966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.634 [2024-11-04 17:09:01.333976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.635 [2024-11-04 17:09:01.391292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.571 17:09:02 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.571 17:09:02 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:01.571 17:09:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:01.571 Malloc0 00:08:01.830 17:09:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.089 Malloc1 00:08:02.089 17:09:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.089 17:09:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:02.348 /dev/nbd0 00:08:02.348 17:09:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:02.348 17:09:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:02.348 17:09:02 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:02.348 1+0 records in 00:08:02.348 1+0 records out 00:08:02.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189544 s, 21.6 MB/s 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:02.348 17:09:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:02.348 17:09:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.348 17:09:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.349 17:09:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:02.608 /dev/nbd1 00:08:02.608 17:09:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:02.608 17:09:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:02.608 1+0 records in 00:08:02.608 1+0 records out 00:08:02.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385079 s, 10.6 MB/s 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:02.608 17:09:03 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:02.608 17:09:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.608 17:09:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.608 17:09:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:08:02.608 17:09:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.608 17:09:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.866 17:09:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:02.866 { 00:08:02.866 "nbd_device": "/dev/nbd0", 00:08:02.866 "bdev_name": "Malloc0" 00:08:02.866 }, 00:08:02.866 { 00:08:02.866 "nbd_device": "/dev/nbd1", 00:08:02.866 "bdev_name": "Malloc1" 00:08:02.866 } 00:08:02.866 ]' 00:08:02.866 17:09:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:02.866 { 00:08:02.867 "nbd_device": "/dev/nbd0", 00:08:02.867 "bdev_name": "Malloc0" 00:08:02.867 }, 00:08:02.867 { 00:08:02.867 "nbd_device": "/dev/nbd1", 00:08:02.867 "bdev_name": "Malloc1" 00:08:02.867 } 00:08:02.867 ]' 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:02.867 /dev/nbd1' 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:02.867 /dev/nbd1' 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:02.867 256+0 records in 00:08:02.867 256+0 records out 00:08:02.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107785 s, 97.3 MB/s 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.867 17:09:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.125 256+0 records in 00:08:03.125 256+0 records out 00:08:03.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225734 s, 46.5 MB/s 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:03.125 256+0 records in 00:08:03.125 256+0 records out 00:08:03.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025082 s, 41.8 MB/s 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.125 17:09:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.384 17:09:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.650 17:09:04 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.650 17:09:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:03.928 17:09:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:03.928 17:09:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:04.187 17:09:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:04.446 [2024-11-04 17:09:05.117399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.446 [2024-11-04 17:09:05.166015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.446 [2024-11-04 17:09:05.166027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.446 [2024-11-04 17:09:05.220571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.446 [2024-11-04 17:09:05.220683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:04.446 [2024-11-04 17:09:05.220697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:07.734 17:09:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:07.734 spdk_app_start Round 1 00:08:07.734 17:09:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:07.734 17:09:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58207 /var/tmp/spdk-nbd.sock 00:08:07.734 17:09:07 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58207 ']' 00:08:07.734 17:09:07 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:07.734 17:09:07 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:07.734 17:09:07 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
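Round 0 above is one full pass of the NBD data-verify loop. Condensed into a readable sketch (paths shortened, retry helpers simplified, error handling omitted), it does the following; the only detail not taken from the trace is the retry delay.

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Two 64 MB malloc bdevs with 4 KiB blocks, exported as NBD devices.
    $rpc bdev_malloc_create 64 4096        # -> Malloc0
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Wait (up to 20 tries) for each device to appear in /proc/partitions.
    for dev in nbd0 nbd1; do
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$dev" /proc/partitions && break
            sleep 0.1                      # assumption: retry delay
        done
    done

    # Write 1 MiB of random data to both devices, then verify byte-for-byte.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0
    cmp -b -n 1M nbdrandtest /dev/nbd1
    rm nbdrandtest

    # Detach the devices and shut the app down before the next round.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM
    sleep 3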
00:08:07.734 17:09:07 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.734 17:09:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.734 17:09:08 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.734 17:09:08 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:07.734 17:09:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:07.993 Malloc0 00:08:07.993 17:09:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:08.252 Malloc1 00:08:08.252 17:09:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.252 17:09:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:08.512 /dev/nbd0 00:08:08.512 17:09:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:08.512 17:09:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:08.512 1+0 records in 00:08:08.512 1+0 records out 
00:08:08.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129099 s, 3.2 MB/s 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:08.512 17:09:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:08.512 17:09:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.512 17:09:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.512 17:09:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:08.771 /dev/nbd1 00:08:08.771 17:09:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:08.771 17:09:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:08.771 1+0 records in 00:08:08.771 1+0 records out 00:08:08.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032495 s, 12.6 MB/s 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:08.771 17:09:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:08.771 17:09:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.771 17:09:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:08.771 17:09:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.771 17:09:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.771 17:09:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:09.030 { 00:08:09.030 "nbd_device": "/dev/nbd0", 00:08:09.030 "bdev_name": "Malloc0" 00:08:09.030 }, 00:08:09.030 { 00:08:09.030 "nbd_device": "/dev/nbd1", 00:08:09.030 "bdev_name": "Malloc1" 00:08:09.030 } 00:08:09.030 
]' 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:09.030 { 00:08:09.030 "nbd_device": "/dev/nbd0", 00:08:09.030 "bdev_name": "Malloc0" 00:08:09.030 }, 00:08:09.030 { 00:08:09.030 "nbd_device": "/dev/nbd1", 00:08:09.030 "bdev_name": "Malloc1" 00:08:09.030 } 00:08:09.030 ]' 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:09.030 /dev/nbd1' 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:09.030 /dev/nbd1' 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:09.030 256+0 records in 00:08:09.030 256+0 records out 00:08:09.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00739975 s, 142 MB/s 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.030 256+0 records in 00:08:09.030 256+0 records out 00:08:09.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277297 s, 37.8 MB/s 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.030 17:09:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:09.294 256+0 records in 00:08:09.294 256+0 records out 00:08:09.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279189 s, 37.6 MB/s 00:08:09.294 17:09:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:09.294 17:09:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.294 17:09:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.294 17:09:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.295 17:09:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.553 17:09:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.812 17:09:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:10.071 17:09:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:10.071 17:09:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:10.330 17:09:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:10.600 [2024-11-04 17:09:11.247854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.600 [2024-11-04 17:09:11.297483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.600 [2024-11-04 17:09:11.297494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.600 [2024-11-04 17:09:11.352448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.600 [2024-11-04 17:09:11.352539] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:10.600 [2024-11-04 17:09:11.352551] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:13.888 17:09:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:13.888 spdk_app_start Round 2 00:08:13.888 17:09:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:13.888 17:09:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58207 /var/tmp/spdk-nbd.sock 00:08:13.888 17:09:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58207 ']' 00:08:13.888 17:09:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:13.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:13.888 17:09:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:13.888 17:09:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:13.888 17:09:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:13.888 17:09:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:13.888 17:09:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:13.888 17:09:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:13.888 17:09:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:13.888 Malloc0 00:08:14.146 17:09:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:14.146 Malloc1 00:08:14.405 17:09:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:14.405 17:09:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:14.405 /dev/nbd0 00:08:14.691 17:09:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:14.691 17:09:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:14.691 1+0 records in 00:08:14.691 1+0 records out 
00:08:14.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201563 s, 20.3 MB/s 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:14.691 17:09:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:14.691 17:09:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:14.691 17:09:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:14.691 17:09:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:14.955 /dev/nbd1 00:08:14.955 17:09:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:14.955 17:09:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:14.955 1+0 records in 00:08:14.955 1+0 records out 00:08:14.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312503 s, 13.1 MB/s 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:14.955 17:09:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:14.955 17:09:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:14.955 17:09:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:14.955 17:09:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:14.955 17:09:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.955 17:09:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:15.214 { 00:08:15.214 "nbd_device": "/dev/nbd0", 00:08:15.214 "bdev_name": "Malloc0" 00:08:15.214 }, 00:08:15.214 { 00:08:15.214 "nbd_device": "/dev/nbd1", 00:08:15.214 "bdev_name": "Malloc1" 00:08:15.214 } 
00:08:15.214 ]' 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:15.214 { 00:08:15.214 "nbd_device": "/dev/nbd0", 00:08:15.214 "bdev_name": "Malloc0" 00:08:15.214 }, 00:08:15.214 { 00:08:15.214 "nbd_device": "/dev/nbd1", 00:08:15.214 "bdev_name": "Malloc1" 00:08:15.214 } 00:08:15.214 ]' 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:15.214 /dev/nbd1' 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:15.214 /dev/nbd1' 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:15.214 256+0 records in 00:08:15.214 256+0 records out 00:08:15.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106549 s, 98.4 MB/s 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:15.214 256+0 records in 00:08:15.214 256+0 records out 00:08:15.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269558 s, 38.9 MB/s 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:15.214 256+0 records in 00:08:15.214 256+0 records out 00:08:15.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250636 s, 41.8 MB/s 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:15.214 17:09:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:15.215 17:09:15 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.215 17:09:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.474 17:09:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.040 17:09:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:16.298 17:09:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:16.298 17:09:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:16.557 17:09:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:16.815 [2024-11-04 17:09:17.401968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.815 [2024-11-04 17:09:17.449777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.815 [2024-11-04 17:09:17.449788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.815 [2024-11-04 17:09:17.507978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.815 [2024-11-04 17:09:17.508097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:16.815 [2024-11-04 17:09:17.508111] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:20.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:20.129 17:09:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58207 /var/tmp/spdk-nbd.sock 00:08:20.129 17:09:20 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58207 ']' 00:08:20.129 17:09:20 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:20.129 17:09:20 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.129 17:09:20 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
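For reference, the nbd_dd_data_verify write and verify passes traced above reduce to the following standalone sketch. The device names, block counts, temporary file, and the relative scripts/rpc.py path are illustrative placeholders (the harness uses its own paths and a retry-capped wait loop), so treat this as a hedged outline of the pattern rather than the test's actual code.

# sketch: write random data to each NBD device, byte-compare it back, then detach
tmp_file=$(mktemp)
nbd_list=('/dev/nbd0' '/dev/nbd1')
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write pass, bypassing the page cache
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                             # verify pass; non-zero exit on any mismatch
done
rm "$tmp_file"
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done      # wait for the kernel to drop the device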
00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:20.130 17:09:20 event.app_repeat -- event/event.sh@39 -- # killprocess 58207 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58207 ']' 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58207 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58207 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.130 killing process with pid 58207 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58207' 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58207 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58207 00:08:20.130 spdk_app_start is called in Round 0. 00:08:20.130 Shutdown signal received, stop current app iteration 00:08:20.130 Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 reinitialization... 00:08:20.130 spdk_app_start is called in Round 1. 00:08:20.130 Shutdown signal received, stop current app iteration 00:08:20.130 Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 reinitialization... 00:08:20.130 spdk_app_start is called in Round 2. 00:08:20.130 Shutdown signal received, stop current app iteration 00:08:20.130 Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 reinitialization... 00:08:20.130 spdk_app_start is called in Round 3. 00:08:20.130 Shutdown signal received, stop current app iteration 00:08:20.130 17:09:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:20.130 17:09:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:20.130 00:08:20.130 real 0m19.655s 00:08:20.130 user 0m44.887s 00:08:20.130 sys 0m2.855s 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.130 17:09:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:20.130 ************************************ 00:08:20.130 END TEST app_repeat 00:08:20.130 ************************************ 00:08:20.130 17:09:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:20.130 17:09:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:20.130 17:09:20 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.130 17:09:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.130 17:09:20 event -- common/autotest_common.sh@10 -- # set +x 00:08:20.130 ************************************ 00:08:20.130 START TEST cpu_locks 00:08:20.130 ************************************ 00:08:20.130 17:09:20 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:20.130 * Looking for test storage... 
00:08:20.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:20.130 17:09:20 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:20.130 17:09:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:08:20.130 17:09:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:20.390 17:09:20 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:20.390 17:09:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.390 17:09:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.390 17:09:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.390 17:09:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.390 17:09:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.390 17:09:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.390 17:09:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.390 17:09:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.391 17:09:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:20.391 17:09:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.391 17:09:20 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:20.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.391 --rc genhtml_branch_coverage=1 00:08:20.391 --rc genhtml_function_coverage=1 00:08:20.391 --rc genhtml_legend=1 00:08:20.391 --rc geninfo_all_blocks=1 00:08:20.391 --rc geninfo_unexecuted_blocks=1 00:08:20.391 00:08:20.391 ' 00:08:20.391 17:09:20 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:20.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.391 --rc genhtml_branch_coverage=1 00:08:20.391 --rc genhtml_function_coverage=1 
00:08:20.391 --rc genhtml_legend=1 00:08:20.391 --rc geninfo_all_blocks=1 00:08:20.391 --rc geninfo_unexecuted_blocks=1 00:08:20.391 00:08:20.391 ' 00:08:20.391 17:09:20 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:20.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.391 --rc genhtml_branch_coverage=1 00:08:20.391 --rc genhtml_function_coverage=1 00:08:20.391 --rc genhtml_legend=1 00:08:20.391 --rc geninfo_all_blocks=1 00:08:20.391 --rc geninfo_unexecuted_blocks=1 00:08:20.391 00:08:20.391 ' 00:08:20.391 17:09:20 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:20.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.391 --rc genhtml_branch_coverage=1 00:08:20.391 --rc genhtml_function_coverage=1 00:08:20.391 --rc genhtml_legend=1 00:08:20.391 --rc geninfo_all_blocks=1 00:08:20.391 --rc geninfo_unexecuted_blocks=1 00:08:20.391 00:08:20.391 ' 00:08:20.391 17:09:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:20.391 17:09:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:20.391 17:09:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:20.391 17:09:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:20.391 17:09:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.391 17:09:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.391 17:09:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.391 ************************************ 00:08:20.391 START TEST default_locks 00:08:20.391 ************************************ 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58653 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58653 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58653 ']' 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.391 17:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.391 [2024-11-04 17:09:21.090934] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:20.391 [2024-11-04 17:09:21.091113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58653 ] 00:08:20.650 [2024-11-04 17:09:21.253192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.650 [2024-11-04 17:09:21.309405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.650 [2024-11-04 17:09:21.382487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.586 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.586 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:08:21.586 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58653 00:08:21.586 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:21.586 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58653 00:08:21.845 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58653 00:08:21.845 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58653 ']' 00:08:21.845 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58653 00:08:21.846 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:08:21.846 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.846 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58653 00:08:21.846 killing process with pid 58653 00:08:21.846 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.846 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.846 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58653' 00:08:21.846 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58653 00:08:21.846 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58653 00:08:22.104 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58653 00:08:22.104 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:22.104 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58653 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58653 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58653 ']' 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.105 
17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58653) - No such process 00:08:22.105 ERROR: process (pid: 58653) is no longer running 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:22.105 00:08:22.105 real 0m1.895s 00:08:22.105 user 0m2.056s 00:08:22.105 sys 0m0.583s 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.105 ************************************ 00:08:22.105 END TEST default_locks 00:08:22.105 17:09:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.105 ************************************ 00:08:22.364 17:09:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:22.364 17:09:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:22.364 17:09:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.364 17:09:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:22.364 ************************************ 00:08:22.364 START TEST default_locks_via_rpc 00:08:22.364 ************************************ 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58705 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58705 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58705 ']' 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:08:22.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.364 17:09:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.364 [2024-11-04 17:09:23.009166] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:22.364 [2024-11-04 17:09:23.009295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58705 ] 00:08:22.364 [2024-11-04 17:09:23.149426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.623 [2024-11-04 17:09:23.209497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.623 [2024-11-04 17:09:23.280940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58705 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58705 00:08:23.560 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58705 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58705 ']' 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58705 00:08:23.820 17:09:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58705 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:23.820 killing process with pid 58705 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58705' 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58705 00:08:23.820 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58705 00:08:24.388 00:08:24.388 real 0m1.967s 00:08:24.388 user 0m2.133s 00:08:24.388 sys 0m0.592s 00:08:24.388 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.388 17:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.388 ************************************ 00:08:24.388 END TEST default_locks_via_rpc 00:08:24.388 ************************************ 00:08:24.388 17:09:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:24.388 17:09:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:24.388 17:09:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.388 17:09:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:24.388 ************************************ 00:08:24.388 START TEST non_locking_app_on_locked_coremask 00:08:24.388 ************************************ 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58756 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58756 /var/tmp/spdk.sock 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58756 ']' 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
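The default_locks and default_locks_via_rpc traces above rely on two checks that are easy to miss in the xtrace noise: probing for the CPU core lock file with lslocks, and toggling the locks at runtime over RPC. A minimal sketch, assuming a placeholder pid and the default /var/tmp/spdk.sock socket, with scripts/rpc.py taken relative to the SPDK repo root:

# sketch: the lock checks used by the tests above
pid=58705                                      # placeholder; the tests substitute the live spdk_tgt pid
lslocks -p "$pid" | grep -q spdk_cpu_lock \
    && echo "pid $pid holds an SPDK CPU core lock"
# default_locks_via_rpc drops and re-takes the locks without restarting the target:
scripts/rpc.py framework_disable_cpumask_locks
scripts/rpc.py framework_enable_cpumask_locks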
00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.388 17:09:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.388 [2024-11-04 17:09:25.048661] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:24.388 [2024-11-04 17:09:25.048766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58756 ] 00:08:24.647 [2024-11-04 17:09:25.196020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.647 [2024-11-04 17:09:25.242154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.647 [2024-11-04 17:09:25.312444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58765 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58765 /var/tmp/spdk2.sock 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58765 ']' 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:24.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.906 17:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.906 [2024-11-04 17:09:25.582508] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:24.906 [2024-11-04 17:09:25.582616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58765 ] 00:08:25.165 [2024-11-04 17:09:25.744005] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
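The two-target setup just traced (pids 58756 and 58765) follows the arrangement sketched below; backgrounding, pid capture, and readiness waits are simplified here, and the binary path is assumed relative to the SPDK build tree.

# sketch: one target holds the core lock, the second opts out and uses its own RPC socket
build/bin/spdk_tgt -m 0x1 &                                              # first target claims core 0
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
# the second target shares core 0 but skips lock acquisition, so both come up;
# each is then driven through its own socket (/var/tmp/spdk.sock vs /var/tmp/spdk2.sock)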
00:08:25.165 [2024-11-04 17:09:25.744066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.165 [2024-11-04 17:09:25.860935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.424 [2024-11-04 17:09:26.005458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.000 17:09:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.000 17:09:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:26.000 17:09:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58756 00:08:26.000 17:09:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:26.000 17:09:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58756 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58756 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58756 ']' 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58756 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58756 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:26.960 killing process with pid 58756 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58756' 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58756 00:08:26.960 17:09:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58756 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58765 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58765 ']' 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58765 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58765 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.527 killing process with pid 58765 00:08:27.527 17:09:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58765' 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58765 00:08:27.527 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58765 00:08:28.095 00:08:28.095 real 0m3.652s 00:08:28.095 user 0m4.008s 00:08:28.095 sys 0m1.101s 00:08:28.095 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.095 17:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.095 ************************************ 00:08:28.095 END TEST non_locking_app_on_locked_coremask 00:08:28.095 ************************************ 00:08:28.095 17:09:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:28.095 17:09:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:28.095 17:09:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.095 17:09:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:28.095 ************************************ 00:08:28.095 START TEST locking_app_on_unlocked_coremask 00:08:28.095 ************************************ 00:08:28.095 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:28.095 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58836 00:08:28.095 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58836 /var/tmp/spdk.sock 00:08:28.095 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:28.095 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58836 ']' 00:08:28.095 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.095 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:28.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.096 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.096 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:28.096 17:09:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.096 [2024-11-04 17:09:28.746754] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:28.096 [2024-11-04 17:09:28.747565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58836 ] 00:08:28.096 [2024-11-04 17:09:28.894681] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
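The killprocess teardown that repeats throughout this trace amounts to the short sequence below; the pid is a placeholder and the retry/error handling of the real helper is omitted, so this is only an outline of the pattern.

# sketch: the killprocess teardown, reduced to its essentials
pid=58836                                    # placeholder; the helper receives the target pid
kill -0 "$pid"                               # fail fast if the process already exited
name=$(ps --no-headers -o comm= "$pid")      # the trace expects reactor_0 and refuses to kill sudo
if [ "$name" != sudo ]; then
    kill "$pid"
    wait "$pid"                              # wait works here only because the target is a child of this shell
fi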
00:08:28.096 [2024-11-04 17:09:28.894738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.354 [2024-11-04 17:09:28.951837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.354 [2024-11-04 17:09:29.027618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58853 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58853 /var/tmp/spdk2.sock 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58853 ']' 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:29.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.316 17:09:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:29.316 [2024-11-04 17:09:29.820277] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:29.316 [2024-11-04 17:09:29.820372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58853 ] 00:08:29.316 [2024-11-04 17:09:29.978427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.316 [2024-11-04 17:09:30.100325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.575 [2024-11-04 17:09:30.241481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.143 17:09:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:30.143 17:09:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:30.143 17:09:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58853 00:08:30.143 17:09:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58853 00:08:30.143 17:09:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58836 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58836 ']' 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58836 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58836 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:31.079 killing process with pid 58836 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58836' 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58836 00:08:31.079 17:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58836 00:08:31.648 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58853 00:08:31.648 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58853 ']' 00:08:31.648 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58853 00:08:31.648 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:31.648 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:31.648 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58853 00:08:31.907 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:31.907 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:31.907 killing process with pid 58853 00:08:31.907 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58853' 00:08:31.907 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58853 00:08:31.907 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58853 00:08:32.166 00:08:32.166 real 0m4.151s 00:08:32.166 user 0m4.677s 00:08:32.166 sys 0m1.139s 00:08:32.166 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.166 17:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:32.166 ************************************ 00:08:32.166 END TEST locking_app_on_unlocked_coremask 00:08:32.166 ************************************ 00:08:32.166 17:09:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:32.166 17:09:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:32.166 17:09:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.166 17:09:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:32.166 ************************************ 00:08:32.166 START TEST locking_app_on_locked_coremask 00:08:32.166 ************************************ 00:08:32.166 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:32.166 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58920 00:08:32.166 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58920 /var/tmp/spdk.sock 00:08:32.166 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:32.167 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58920 ']' 00:08:32.167 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.167 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.167 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.167 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.167 17:09:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:32.167 [2024-11-04 17:09:32.942860] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:32.167 [2024-11-04 17:09:32.942963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58920 ] 00:08:32.428 [2024-11-04 17:09:33.091517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.428 [2024-11-04 17:09:33.145798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.428 [2024-11-04 17:09:33.213746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58923 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58923 /var/tmp/spdk2.sock 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58923 /var/tmp/spdk2.sock 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58923 /var/tmp/spdk2.sock 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58923 ']' 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.689 17:09:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:32.689 [2024-11-04 17:09:33.485272] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:32.689 [2024-11-04 17:09:33.485376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58923 ] 00:08:32.948 [2024-11-04 17:09:33.641433] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58920 has claimed it. 00:08:32.948 [2024-11-04 17:09:33.641503] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:33.516 ERROR: process (pid: 58923) is no longer running 00:08:33.516 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58923) - No such process 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58920 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58920 00:08:33.516 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58920 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58920 ']' 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58920 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58920 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:34.084 killing process with pid 58920 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58920' 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58920 00:08:34.084 17:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58920 00:08:34.343 00:08:34.343 real 0m2.182s 00:08:34.343 user 0m2.476s 00:08:34.343 sys 0m0.597s 00:08:34.343 17:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.343 17:09:35 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:08:34.343 ************************************ 00:08:34.343 END TEST locking_app_on_locked_coremask 00:08:34.343 ************************************ 00:08:34.343 17:09:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:34.343 17:09:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:34.343 17:09:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.343 17:09:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.343 ************************************ 00:08:34.343 START TEST locking_overlapped_coremask 00:08:34.343 ************************************ 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58974 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58974 /var/tmp/spdk.sock 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58974 ']' 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:34.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:34.343 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:34.603 [2024-11-04 17:09:35.189835] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:34.603 [2024-11-04 17:09:35.189952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:08:34.603 [2024-11-04 17:09:35.337840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.603 [2024-11-04 17:09:35.393267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.603 [2024-11-04 17:09:35.393405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.603 [2024-11-04 17:09:35.393408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.862 [2024-11-04 17:09:35.462597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58985 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58985 /var/tmp/spdk2.sock 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58985 /var/tmp/spdk2.sock 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.862 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:35.121 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.121 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58985 /var/tmp/spdk2.sock 00:08:35.121 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58985 ']' 00:08:35.121 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:35.121 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:35.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:35.121 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:35.121 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:35.121 17:09:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.121 [2024-11-04 17:09:35.733556] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:35.121 [2024-11-04 17:09:35.733681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58985 ] 00:08:35.121 [2024-11-04 17:09:35.893691] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58974 has claimed it. 00:08:35.121 [2024-11-04 17:09:35.893745] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:35.690 ERROR: process (pid: 58985) is no longer running 00:08:35.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58985) - No such process 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58974 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58974 ']' 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58974 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:35.690 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58974 00:08:35.950 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:35.950 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:35.950 killing process with pid 58974 00:08:35.950 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58974' 00:08:35.950 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58974 00:08:35.950 17:09:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58974 00:08:36.209 00:08:36.209 real 0m1.756s 00:08:36.209 user 0m4.767s 00:08:36.209 sys 0m0.437s 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:36.209 ************************************ 00:08:36.209 END TEST locking_overlapped_coremask 00:08:36.209 ************************************ 00:08:36.209 17:09:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:36.209 17:09:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:36.209 17:09:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.209 17:09:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:36.209 ************************************ 00:08:36.209 START TEST locking_overlapped_coremask_via_rpc 00:08:36.209 ************************************ 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59025 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59025 /var/tmp/spdk.sock 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59025 ']' 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.209 17:09:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.209 [2024-11-04 17:09:37.000272] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:36.209 [2024-11-04 17:09:37.000367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59025 ] 00:08:36.469 [2024-11-04 17:09:37.147761] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:36.469 [2024-11-04 17:09:37.147841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.469 [2024-11-04 17:09:37.212309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.469 [2024-11-04 17:09:37.212467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.469 [2024-11-04 17:09:37.212471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.728 [2024-11-04 17:09:37.283827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59035 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59035 /var/tmp/spdk2.sock 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59035 ']' 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:36.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.728 17:09:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.987 [2024-11-04 17:09:37.549526] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:36.987 [2024-11-04 17:09:37.549642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 00:08:36.987 [2024-11-04 17:09:37.713005] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:36.987 [2024-11-04 17:09:37.713055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.258 [2024-11-04 17:09:37.835798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.258 [2024-11-04 17:09:37.839327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:37.258 [2024-11-04 17:09:37.839328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.258 [2024-11-04 17:09:37.980307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.837 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.837 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:37.837 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:37.837 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.838 [2024-11-04 17:09:38.589523] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59025 has claimed it. 
00:08:37.838 request: 00:08:37.838 { 00:08:37.838 "method": "framework_enable_cpumask_locks", 00:08:37.838 "req_id": 1 00:08:37.838 } 00:08:37.838 Got JSON-RPC error response 00:08:37.838 response: 00:08:37.838 { 00:08:37.838 "code": -32603, 00:08:37.838 "message": "Failed to claim CPU core: 2" 00:08:37.838 } 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59025 /var/tmp/spdk.sock 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59025 ']' 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:37.838 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59035 /var/tmp/spdk2.sock 00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59035 ']' 00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:38.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
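In this via_rpc variant both targets start with --disable-cpumask-locks, so they come up cleanly on overlapping masks and only try to take the core locks afterwards over JSON-RPC. The first target's framework_enable_cpumask_locks call succeeds; the call against the second target fails with the -32603 response shown above because core 2 is already locked by pid 59025. The rpc_cmd helper in the trace is presumably a thin wrapper around scripts/rpc.py; issued by hand, the two calls would look roughly like:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # first target: locks claimed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: -32603, core 2 already claimed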
00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:38.096 17:09:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 17:09:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:38.663 17:09:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:38.663 17:09:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:38.663 ************************************ 00:08:38.663 END TEST locking_overlapped_coremask_via_rpc 00:08:38.663 ************************************ 00:08:38.663 17:09:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:38.663 17:09:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:38.663 17:09:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:38.663 00:08:38.663 real 0m2.261s 00:08:38.663 user 0m1.280s 00:08:38.663 sys 0m0.185s 00:08:38.663 17:09:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.663 17:09:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.663 17:09:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:38.663 17:09:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59025 ]] 00:08:38.663 17:09:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59025 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59025 ']' 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59025 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59025 00:08:38.663 killing process with pid 59025 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59025' 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59025 00:08:38.663 17:09:39 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59025 00:08:38.922 17:09:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59035 ]] 00:08:38.922 17:09:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59035 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59035 ']' 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59035 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.922 
17:09:39 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59035 00:08:38.922 killing process with pid 59035 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59035' 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59035 00:08:38.922 17:09:39 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59035 00:08:39.489 17:09:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:39.489 17:09:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:39.489 17:09:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59025 ]] 00:08:39.489 17:09:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59025 00:08:39.489 Process with pid 59025 is not found 00:08:39.489 Process with pid 59035 is not found 00:08:39.489 17:09:40 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59025 ']' 00:08:39.489 17:09:40 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59025 00:08:39.489 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59025) - No such process 00:08:39.489 17:09:40 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59025 is not found' 00:08:39.489 17:09:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59035 ]] 00:08:39.489 17:09:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59035 00:08:39.489 17:09:40 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59035 ']' 00:08:39.489 17:09:40 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59035 00:08:39.489 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59035) - No such process 00:08:39.489 17:09:40 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59035 is not found' 00:08:39.489 17:09:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:39.489 ************************************ 00:08:39.489 END TEST cpu_locks 00:08:39.489 ************************************ 00:08:39.489 00:08:39.489 real 0m19.432s 00:08:39.489 user 0m33.693s 00:08:39.489 sys 0m5.591s 00:08:39.489 17:09:40 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.489 17:09:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.748 ************************************ 00:08:39.748 END TEST event 00:08:39.748 ************************************ 00:08:39.748 00:08:39.748 real 0m46.443s 00:08:39.748 user 1m28.850s 00:08:39.748 sys 0m9.220s 00:08:39.748 17:09:40 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.748 17:09:40 event -- common/autotest_common.sh@10 -- # set +x 00:08:39.748 17:09:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:39.748 17:09:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:39.748 17:09:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.748 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:08:39.748 ************************************ 00:08:39.748 START TEST thread 00:08:39.749 ************************************ 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:39.749 * Looking for test storage... 
00:08:39.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.749 17:09:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.749 17:09:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.749 17:09:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.749 17:09:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.749 17:09:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.749 17:09:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.749 17:09:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.749 17:09:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.749 17:09:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.749 17:09:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.749 17:09:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.749 17:09:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:39.749 17:09:40 thread -- scripts/common.sh@345 -- # : 1 00:08:39.749 17:09:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.749 17:09:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.749 17:09:40 thread -- scripts/common.sh@365 -- # decimal 1 00:08:39.749 17:09:40 thread -- scripts/common.sh@353 -- # local d=1 00:08:39.749 17:09:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.749 17:09:40 thread -- scripts/common.sh@355 -- # echo 1 00:08:39.749 17:09:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.749 17:09:40 thread -- scripts/common.sh@366 -- # decimal 2 00:08:39.749 17:09:40 thread -- scripts/common.sh@353 -- # local d=2 00:08:39.749 17:09:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.749 17:09:40 thread -- scripts/common.sh@355 -- # echo 2 00:08:39.749 17:09:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.749 17:09:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.749 17:09:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.749 17:09:40 thread -- scripts/common.sh@368 -- # return 0 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.749 --rc genhtml_branch_coverage=1 00:08:39.749 --rc genhtml_function_coverage=1 00:08:39.749 --rc genhtml_legend=1 00:08:39.749 --rc geninfo_all_blocks=1 00:08:39.749 --rc geninfo_unexecuted_blocks=1 00:08:39.749 00:08:39.749 ' 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.749 --rc genhtml_branch_coverage=1 00:08:39.749 --rc genhtml_function_coverage=1 00:08:39.749 --rc genhtml_legend=1 00:08:39.749 --rc geninfo_all_blocks=1 00:08:39.749 --rc geninfo_unexecuted_blocks=1 00:08:39.749 00:08:39.749 ' 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:39.749 --rc genhtml_branch_coverage=1 00:08:39.749 --rc genhtml_function_coverage=1 00:08:39.749 --rc genhtml_legend=1 00:08:39.749 --rc geninfo_all_blocks=1 00:08:39.749 --rc geninfo_unexecuted_blocks=1 00:08:39.749 00:08:39.749 ' 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.749 --rc genhtml_branch_coverage=1 00:08:39.749 --rc genhtml_function_coverage=1 00:08:39.749 --rc genhtml_legend=1 00:08:39.749 --rc geninfo_all_blocks=1 00:08:39.749 --rc geninfo_unexecuted_blocks=1 00:08:39.749 00:08:39.749 ' 00:08:39.749 17:09:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.749 17:09:40 thread -- common/autotest_common.sh@10 -- # set +x 00:08:39.749 ************************************ 00:08:39.749 START TEST thread_poller_perf 00:08:39.749 ************************************ 00:08:39.749 17:09:40 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:39.749 [2024-11-04 17:09:40.537740] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:39.749 [2024-11-04 17:09:40.538557] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ] 00:08:40.008 [2024-11-04 17:09:40.685058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.008 [2024-11-04 17:09:40.750695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.008 Running 1000 pollers for 1 seconds with 1 microseconds period. 
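The poller_perf invocation above is run with -b 1000 -l 1 -t 1, which lines up with the banner it prints: 1000 pollers, a 1 microsecond poller period, a 1 second run (the second invocation further below uses -l 0 for a 0 microsecond period). A standalone run would look like this sketch, with the binary path taken from the trace:

  /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1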
00:08:41.386 [2024-11-04T17:09:42.190Z] ====================================== 00:08:41.386 [2024-11-04T17:09:42.190Z] busy:2206855663 (cyc) 00:08:41.386 [2024-11-04T17:09:42.190Z] total_run_count: 346000 00:08:41.386 [2024-11-04T17:09:42.190Z] tsc_hz: 2200000000 (cyc) 00:08:41.386 [2024-11-04T17:09:42.190Z] ====================================== 00:08:41.386 [2024-11-04T17:09:42.190Z] poller_cost: 6378 (cyc), 2899 (nsec) 00:08:41.386 00:08:41.386 real 0m1.283s 00:08:41.386 user 0m1.128s 00:08:41.386 sys 0m0.047s 00:08:41.386 17:09:41 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.386 ************************************ 00:08:41.386 END TEST thread_poller_perf 00:08:41.386 ************************************ 00:08:41.386 17:09:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 17:09:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:41.386 17:09:41 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:41.386 17:09:41 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.386 17:09:41 thread -- common/autotest_common.sh@10 -- # set +x 00:08:41.386 ************************************ 00:08:41.386 START TEST thread_poller_perf 00:08:41.386 ************************************ 00:08:41.386 17:09:41 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:41.386 [2024-11-04 17:09:41.873844] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:41.386 [2024-11-04 17:09:41.873945] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ] 00:08:41.386 [2024-11-04 17:09:42.017546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.386 Running 1000 pollers for 1 seconds with 0 microseconds period. 
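The summary just printed is consistent with poller_cost being the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. Reproducing the first run's figures with shell arithmetic (the exact rounding inside poller_perf is an assumption; plain integer division happens to match the reported values):

  cost_cyc=$(( 2206855663 / 346000 ))                # = 6378 (cyc)
  cost_ns=$(( cost_cyc * 1000000000 / 2200000000 ))  # = 2899 (nsec)
  echo "${cost_cyc} cyc, ${cost_ns} nsec"

The second run below reports 475 cyc and 215 nsec, which follows from the same division of 2202319664 by 4631000.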
00:08:41.386 [2024-11-04 17:09:42.068837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.323 [2024-11-04T17:09:43.127Z] ====================================== 00:08:42.323 [2024-11-04T17:09:43.127Z] busy:2202319664 (cyc) 00:08:42.323 [2024-11-04T17:09:43.127Z] total_run_count: 4631000 00:08:42.323 [2024-11-04T17:09:43.127Z] tsc_hz: 2200000000 (cyc) 00:08:42.323 [2024-11-04T17:09:43.127Z] ====================================== 00:08:42.323 [2024-11-04T17:09:43.127Z] poller_cost: 475 (cyc), 215 (nsec) 00:08:42.323 ************************************ 00:08:42.323 END TEST thread_poller_perf 00:08:42.323 ************************************ 00:08:42.323 00:08:42.323 real 0m1.260s 00:08:42.323 user 0m1.109s 00:08:42.323 sys 0m0.044s 00:08:42.323 17:09:43 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.323 17:09:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:42.582 17:09:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:42.582 ************************************ 00:08:42.582 END TEST thread 00:08:42.582 ************************************ 00:08:42.582 00:08:42.582 real 0m2.824s 00:08:42.582 user 0m2.369s 00:08:42.582 sys 0m0.238s 00:08:42.582 17:09:43 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.582 17:09:43 thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.582 17:09:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:42.582 17:09:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:42.582 17:09:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:42.582 17:09:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.582 17:09:43 -- common/autotest_common.sh@10 -- # set +x 00:08:42.582 ************************************ 00:08:42.582 START TEST app_cmdline 00:08:42.582 ************************************ 00:08:42.582 17:09:43 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:42.582 * Looking for test storage... 
00:08:42.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:42.582 17:09:43 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:42.582 17:09:43 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:42.582 17:09:43 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:42.855 17:09:43 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.855 17:09:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:42.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.856 17:09:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:42.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.856 --rc genhtml_branch_coverage=1 00:08:42.856 --rc genhtml_function_coverage=1 00:08:42.856 --rc genhtml_legend=1 00:08:42.856 --rc geninfo_all_blocks=1 00:08:42.856 --rc geninfo_unexecuted_blocks=1 00:08:42.856 00:08:42.856 ' 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:42.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.856 --rc genhtml_branch_coverage=1 00:08:42.856 --rc genhtml_function_coverage=1 00:08:42.856 --rc genhtml_legend=1 00:08:42.856 --rc geninfo_all_blocks=1 00:08:42.856 --rc geninfo_unexecuted_blocks=1 00:08:42.856 00:08:42.856 ' 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:42.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.856 --rc genhtml_branch_coverage=1 00:08:42.856 --rc genhtml_function_coverage=1 00:08:42.856 --rc genhtml_legend=1 00:08:42.856 --rc geninfo_all_blocks=1 00:08:42.856 --rc geninfo_unexecuted_blocks=1 00:08:42.856 00:08:42.856 ' 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:42.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.856 --rc genhtml_branch_coverage=1 00:08:42.856 --rc genhtml_function_coverage=1 00:08:42.856 --rc genhtml_legend=1 00:08:42.856 --rc geninfo_all_blocks=1 00:08:42.856 --rc geninfo_unexecuted_blocks=1 00:08:42.856 00:08:42.856 ' 00:08:42.856 17:09:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:42.856 17:09:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59289 00:08:42.856 17:09:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59289 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59289 ']' 00:08:42.856 17:09:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.856 17:09:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.856 [2024-11-04 17:09:43.478859] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
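The spdk_tgt instance for this cmdline test is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served. As the output below shows, spdk_get_version returns the version object while env_dpdk_get_mem_stats is rejected with -32601 (Method not found). Both calls appear verbatim in the trace that follows; back to back they are:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # allowed: returns the SPDK v25.01-pre version object
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # not on the allowed list: fails with -32601 Method not found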
00:08:42.856 [2024-11-04 17:09:43.479202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59289 ] 00:08:42.856 [2024-11-04 17:09:43.622869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.115 [2024-11-04 17:09:43.684359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.115 [2024-11-04 17:09:43.763455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.375 17:09:43 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:43.375 17:09:43 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:43.375 17:09:43 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:43.633 { 00:08:43.633 "version": "SPDK v25.01-pre git sha1 16e58adb1", 00:08:43.633 "fields": { 00:08:43.633 "major": 25, 00:08:43.633 "minor": 1, 00:08:43.633 "patch": 0, 00:08:43.633 "suffix": "-pre", 00:08:43.633 "commit": "16e58adb1" 00:08:43.633 } 00:08:43.633 } 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:43.633 17:09:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:43.633 17:09:44 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:43.891 request: 00:08:43.891 { 00:08:43.891 "method": "env_dpdk_get_mem_stats", 00:08:43.891 "req_id": 1 00:08:43.891 } 00:08:43.891 Got JSON-RPC error response 00:08:43.891 response: 00:08:43.891 { 00:08:43.891 "code": -32601, 00:08:43.892 "message": "Method not found" 00:08:43.892 } 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.892 17:09:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59289 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59289 ']' 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59289 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59289 00:08:43.892 killing process with pid 59289 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59289' 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@971 -- # kill 59289 00:08:43.892 17:09:44 app_cmdline -- common/autotest_common.sh@976 -- # wait 59289 00:08:44.459 ************************************ 00:08:44.459 END TEST app_cmdline 00:08:44.459 ************************************ 00:08:44.459 00:08:44.459 real 0m1.814s 00:08:44.459 user 0m2.225s 00:08:44.459 sys 0m0.470s 00:08:44.459 17:09:45 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.459 17:09:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:44.459 17:09:45 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:44.459 17:09:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:44.459 17:09:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.459 17:09:45 -- common/autotest_common.sh@10 -- # set +x 00:08:44.459 ************************************ 00:08:44.459 START TEST version 00:08:44.459 ************************************ 00:08:44.459 17:09:45 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:44.459 * Looking for test storage... 
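The version test that starts here reads the version fields straight out of include/spdk/version.h. The get_header_version helper visible in the trace below boils down to this pipeline (combined onto one line here), shown for SPDK_VERSION_MAJOR; MINOR, PATCH and SUFFIX are extracted the same way:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'   # yields 25

The shell side then assembles 25.1rc0 from major=25, minor=1, patch=0 and the -pre suffix, and checks it against Python's spdk.__version__, which also reports 25.1rc0.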
00:08:44.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:44.459 17:09:45 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:44.459 17:09:45 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:44.459 17:09:45 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:44.459 17:09:45 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:44.459 17:09:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.459 17:09:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.459 17:09:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.459 17:09:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.459 17:09:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.459 17:09:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.459 17:09:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.459 17:09:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.459 17:09:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.459 17:09:45 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.459 17:09:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.459 17:09:45 version -- scripts/common.sh@344 -- # case "$op" in 00:08:44.459 17:09:45 version -- scripts/common.sh@345 -- # : 1 00:08:44.459 17:09:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.459 17:09:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.459 17:09:45 version -- scripts/common.sh@365 -- # decimal 1 00:08:44.718 17:09:45 version -- scripts/common.sh@353 -- # local d=1 00:08:44.718 17:09:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.718 17:09:45 version -- scripts/common.sh@355 -- # echo 1 00:08:44.718 17:09:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.718 17:09:45 version -- scripts/common.sh@366 -- # decimal 2 00:08:44.718 17:09:45 version -- scripts/common.sh@353 -- # local d=2 00:08:44.718 17:09:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.718 17:09:45 version -- scripts/common.sh@355 -- # echo 2 00:08:44.718 17:09:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.718 17:09:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.718 17:09:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.718 17:09:45 version -- scripts/common.sh@368 -- # return 0 00:08:44.718 17:09:45 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.718 17:09:45 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:44.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.718 --rc genhtml_branch_coverage=1 00:08:44.718 --rc genhtml_function_coverage=1 00:08:44.718 --rc genhtml_legend=1 00:08:44.718 --rc geninfo_all_blocks=1 00:08:44.718 --rc geninfo_unexecuted_blocks=1 00:08:44.718 00:08:44.718 ' 00:08:44.718 17:09:45 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:44.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.718 --rc genhtml_branch_coverage=1 00:08:44.718 --rc genhtml_function_coverage=1 00:08:44.718 --rc genhtml_legend=1 00:08:44.718 --rc geninfo_all_blocks=1 00:08:44.718 --rc geninfo_unexecuted_blocks=1 00:08:44.718 00:08:44.718 ' 00:08:44.718 17:09:45 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:44.718 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:44.718 --rc genhtml_branch_coverage=1 00:08:44.718 --rc genhtml_function_coverage=1 00:08:44.718 --rc genhtml_legend=1 00:08:44.718 --rc geninfo_all_blocks=1 00:08:44.718 --rc geninfo_unexecuted_blocks=1 00:08:44.718 00:08:44.718 ' 00:08:44.718 17:09:45 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:44.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.718 --rc genhtml_branch_coverage=1 00:08:44.718 --rc genhtml_function_coverage=1 00:08:44.718 --rc genhtml_legend=1 00:08:44.718 --rc geninfo_all_blocks=1 00:08:44.718 --rc geninfo_unexecuted_blocks=1 00:08:44.718 00:08:44.718 ' 00:08:44.718 17:09:45 version -- app/version.sh@17 -- # get_header_version major 00:08:44.718 17:09:45 version -- app/version.sh@14 -- # cut -f2 00:08:44.718 17:09:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:44.718 17:09:45 version -- app/version.sh@14 -- # tr -d '"' 00:08:44.718 17:09:45 version -- app/version.sh@17 -- # major=25 00:08:44.718 17:09:45 version -- app/version.sh@18 -- # get_header_version minor 00:08:44.718 17:09:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:44.718 17:09:45 version -- app/version.sh@14 -- # cut -f2 00:08:44.718 17:09:45 version -- app/version.sh@14 -- # tr -d '"' 00:08:44.718 17:09:45 version -- app/version.sh@18 -- # minor=1 00:08:44.718 17:09:45 version -- app/version.sh@19 -- # get_header_version patch 00:08:44.718 17:09:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:44.718 17:09:45 version -- app/version.sh@14 -- # cut -f2 00:08:44.718 17:09:45 version -- app/version.sh@14 -- # tr -d '"' 00:08:44.719 17:09:45 version -- app/version.sh@19 -- # patch=0 00:08:44.719 17:09:45 version -- app/version.sh@20 -- # get_header_version suffix 00:08:44.719 17:09:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:44.719 17:09:45 version -- app/version.sh@14 -- # cut -f2 00:08:44.719 17:09:45 version -- app/version.sh@14 -- # tr -d '"' 00:08:44.719 17:09:45 version -- app/version.sh@20 -- # suffix=-pre 00:08:44.719 17:09:45 version -- app/version.sh@22 -- # version=25.1 00:08:44.719 17:09:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:44.719 17:09:45 version -- app/version.sh@28 -- # version=25.1rc0 00:08:44.719 17:09:45 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:44.719 17:09:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:44.719 17:09:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:44.719 17:09:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:44.719 00:08:44.719 real 0m0.289s 00:08:44.719 user 0m0.194s 00:08:44.719 sys 0m0.131s 00:08:44.719 ************************************ 00:08:44.719 END TEST version 00:08:44.719 ************************************ 00:08:44.719 17:09:45 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.719 17:09:45 version -- common/autotest_common.sh@10 -- # set +x 00:08:44.719 17:09:45 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:44.719 17:09:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:44.719 17:09:45 -- spdk/autotest.sh@194 -- # uname -s 00:08:44.719 17:09:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:44.719 17:09:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:44.719 17:09:45 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:08:44.719 17:09:45 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:08:44.719 17:09:45 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:44.719 17:09:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:44.719 17:09:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.719 17:09:45 -- common/autotest_common.sh@10 -- # set +x 00:08:44.719 ************************************ 00:08:44.719 START TEST spdk_dd 00:08:44.719 ************************************ 00:08:44.719 17:09:45 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:44.719 * Looking for test storage... 00:08:44.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:44.719 17:09:45 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:44.719 17:09:45 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:44.719 17:09:45 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:44.977 17:09:45 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:44.977 17:09:45 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.977 17:09:45 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.977 17:09:45 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.977 17:09:45 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.977 17:09:45 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@345 -- # : 1 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@368 -- # return 0 00:08:44.978 17:09:45 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.978 17:09:45 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:44.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.978 --rc genhtml_branch_coverage=1 00:08:44.978 --rc genhtml_function_coverage=1 00:08:44.978 --rc genhtml_legend=1 00:08:44.978 --rc geninfo_all_blocks=1 00:08:44.978 --rc geninfo_unexecuted_blocks=1 00:08:44.978 00:08:44.978 ' 00:08:44.978 17:09:45 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:44.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.978 --rc genhtml_branch_coverage=1 00:08:44.978 --rc genhtml_function_coverage=1 00:08:44.978 --rc genhtml_legend=1 00:08:44.978 --rc geninfo_all_blocks=1 00:08:44.978 --rc geninfo_unexecuted_blocks=1 00:08:44.978 00:08:44.978 ' 00:08:44.978 17:09:45 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:44.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.978 --rc genhtml_branch_coverage=1 00:08:44.978 --rc genhtml_function_coverage=1 00:08:44.978 --rc genhtml_legend=1 00:08:44.978 --rc geninfo_all_blocks=1 00:08:44.978 --rc geninfo_unexecuted_blocks=1 00:08:44.978 00:08:44.978 ' 00:08:44.978 17:09:45 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:44.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.978 --rc genhtml_branch_coverage=1 00:08:44.978 --rc genhtml_function_coverage=1 00:08:44.978 --rc genhtml_legend=1 00:08:44.978 --rc geninfo_all_blocks=1 00:08:44.978 --rc geninfo_unexecuted_blocks=1 00:08:44.978 00:08:44.978 ' 00:08:44.978 17:09:45 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.978 17:09:45 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.978 17:09:45 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.978 17:09:45 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.978 17:09:45 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.978 17:09:45 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:44.978 17:09:45 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.978 17:09:45 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:45.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:45.237 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:45.237 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:45.237 17:09:45 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:45.237 17:09:45 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:45.237 17:09:45 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:08:45.237 17:09:45 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:08:45.237 17:09:45 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:45.237 17:09:45 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:45.237 17:09:45 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:45.237 17:09:45 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:08:45.237 17:09:45 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@233 -- # local class 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@235 -- # local progif 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@236 -- # class=01 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:08:45.237 17:09:46 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:45.237 17:09:46 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:45.497 17:09:46 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:45.497 17:09:46 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:45.497 17:09:46 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:08:45.497 17:09:46 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:45.497 17:09:46 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:08:45.497 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:45.498 * spdk_dd linked to liburing 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:45.498 17:09:46 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:45.498 17:09:46 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:45.499 17:09:46 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:45.499 17:09:46 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:08:45.499 17:09:46 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:45.499 17:09:46 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:45.499 17:09:46 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:45.499 17:09:46 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:45.499 17:09:46 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:45.499 17:09:46 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:45.499 17:09:46 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:45.499 17:09:46 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.499 17:09:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:45.499 ************************************ 00:08:45.499 START TEST spdk_dd_basic_rw 00:08:45.499 ************************************ 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:45.499 * Looking for test storage... 00:08:45.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.499 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:08:45.759 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.759 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:08:45.759 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.760 --rc genhtml_branch_coverage=1 00:08:45.760 --rc genhtml_function_coverage=1 00:08:45.760 --rc genhtml_legend=1 00:08:45.760 --rc geninfo_all_blocks=1 00:08:45.760 --rc geninfo_unexecuted_blocks=1 00:08:45.760 00:08:45.760 ' 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.760 --rc genhtml_branch_coverage=1 00:08:45.760 --rc genhtml_function_coverage=1 00:08:45.760 --rc genhtml_legend=1 00:08:45.760 --rc geninfo_all_blocks=1 00:08:45.760 --rc geninfo_unexecuted_blocks=1 00:08:45.760 00:08:45.760 ' 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.760 --rc genhtml_branch_coverage=1 00:08:45.760 --rc genhtml_function_coverage=1 00:08:45.760 --rc genhtml_legend=1 00:08:45.760 --rc geninfo_all_blocks=1 00:08:45.760 --rc geninfo_unexecuted_blocks=1 00:08:45.760 00:08:45.760 ' 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.760 --rc genhtml_branch_coverage=1 00:08:45.760 --rc genhtml_function_coverage=1 00:08:45.760 --rc genhtml_legend=1 00:08:45.760 --rc geninfo_all_blocks=1 00:08:45.760 --rc geninfo_unexecuted_blocks=1 00:08:45.760 00:08:45.760 ' 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:45.760 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:45.761 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:45.761 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:45.762 ************************************ 00:08:45.762 START TEST dd_bs_lt_native_bs 00:08:45.762 ************************************ 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:45.762 17:09:46 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:46.021 { 00:08:46.021 "subsystems": [ 00:08:46.021 { 00:08:46.021 "subsystem": "bdev", 00:08:46.021 "config": [ 00:08:46.021 { 00:08:46.021 "params": { 00:08:46.021 "trtype": "pcie", 00:08:46.021 "traddr": "0000:00:10.0", 00:08:46.021 "name": "Nvme0" 00:08:46.021 }, 00:08:46.021 "method": "bdev_nvme_attach_controller" 00:08:46.021 }, 00:08:46.021 { 00:08:46.021 "method": "bdev_wait_for_examine" 00:08:46.021 } 00:08:46.021 ] 00:08:46.021 } 00:08:46.021 ] 00:08:46.021 } 00:08:46.021 [2024-11-04 17:09:46.596832] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:46.021 [2024-11-04 17:09:46.596927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59633 ] 00:08:46.021 [2024-11-04 17:09:46.750177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.021 [2024-11-04 17:09:46.814496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.280 [2024-11-04 17:09:46.875024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.280 [2024-11-04 17:09:46.988990] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:46.280 [2024-11-04 17:09:46.989071] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.558 [2024-11-04 17:09:47.120798] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.558 00:08:46.558 real 0m0.646s 00:08:46.558 user 0m0.437s 00:08:46.558 sys 0m0.164s 00:08:46.558 ************************************ 00:08:46.558 END TEST dd_bs_lt_native_bs 00:08:46.558 ************************************ 00:08:46.558 
17:09:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:46.558 ************************************ 00:08:46.558 START TEST dd_rw 00:08:46.558 ************************************ 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:46.558 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:46.559 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:46.559 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:47.145 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:47.145 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:47.145 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:47.145 17:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:47.145 { 00:08:47.145 "subsystems": [ 00:08:47.145 { 00:08:47.145 "subsystem": "bdev", 00:08:47.145 "config": [ 00:08:47.145 { 00:08:47.145 "params": { 00:08:47.145 "trtype": "pcie", 00:08:47.145 "traddr": "0000:00:10.0", 00:08:47.145 "name": "Nvme0" 00:08:47.145 }, 00:08:47.145 "method": "bdev_nvme_attach_controller" 00:08:47.145 }, 00:08:47.145 { 00:08:47.145 "method": "bdev_wait_for_examine" 00:08:47.145 } 00:08:47.145 ] 00:08:47.145 } 00:08:47.145 
] 00:08:47.145 } 00:08:47.145 [2024-11-04 17:09:47.867191] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:47.145 [2024-11-04 17:09:47.867817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59664 ] 00:08:47.404 [2024-11-04 17:09:48.014679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.404 [2024-11-04 17:09:48.065679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.404 [2024-11-04 17:09:48.119625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.663  [2024-11-04T17:09:48.467Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:47.663 00:08:47.663 17:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:47.663 17:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:47.663 17:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:47.663 17:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:47.922 [2024-11-04 17:09:48.471304] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:47.922 [2024-11-04 17:09:48.471655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59678 ] 00:08:47.922 { 00:08:47.922 "subsystems": [ 00:08:47.922 { 00:08:47.922 "subsystem": "bdev", 00:08:47.922 "config": [ 00:08:47.922 { 00:08:47.922 "params": { 00:08:47.922 "trtype": "pcie", 00:08:47.922 "traddr": "0000:00:10.0", 00:08:47.922 "name": "Nvme0" 00:08:47.922 }, 00:08:47.922 "method": "bdev_nvme_attach_controller" 00:08:47.922 }, 00:08:47.922 { 00:08:47.922 "method": "bdev_wait_for_examine" 00:08:47.922 } 00:08:47.922 ] 00:08:47.922 } 00:08:47.922 ] 00:08:47.922 } 00:08:47.922 [2024-11-04 17:09:48.618285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.922 [2024-11-04 17:09:48.666656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.922 [2024-11-04 17:09:48.722487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.181  [2024-11-04T17:09:49.244Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:48.440 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:48.440 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:48.440 [2024-11-04 17:09:49.092904] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:48.440 [2024-11-04 17:09:49.093225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:08:48.440 { 00:08:48.440 "subsystems": [ 00:08:48.440 { 00:08:48.440 "subsystem": "bdev", 00:08:48.440 "config": [ 00:08:48.440 { 00:08:48.440 "params": { 00:08:48.440 "trtype": "pcie", 00:08:48.440 "traddr": "0000:00:10.0", 00:08:48.440 "name": "Nvme0" 00:08:48.440 }, 00:08:48.440 "method": "bdev_nvme_attach_controller" 00:08:48.440 }, 00:08:48.440 { 00:08:48.440 "method": "bdev_wait_for_examine" 00:08:48.440 } 00:08:48.440 ] 00:08:48.440 } 00:08:48.440 ] 00:08:48.440 } 00:08:48.440 [2024-11-04 17:09:49.239382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.699 [2024-11-04 17:09:49.298983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.699 [2024-11-04 17:09:49.356035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.699  [2024-11-04T17:09:49.771Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:48.967 00:08:48.967 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:48.967 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:48.967 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:48.967 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:48.967 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:48.967 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:48.967 17:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:49.534 17:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:49.534 17:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:49.534 17:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:49.534 17:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:49.534 [2024-11-04 17:09:50.300924] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:49.534 [2024-11-04 17:09:50.301021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:08:49.534 { 00:08:49.534 "subsystems": [ 00:08:49.534 { 00:08:49.534 "subsystem": "bdev", 00:08:49.534 "config": [ 00:08:49.534 { 00:08:49.534 "params": { 00:08:49.534 "trtype": "pcie", 00:08:49.534 "traddr": "0000:00:10.0", 00:08:49.534 "name": "Nvme0" 00:08:49.534 }, 00:08:49.534 "method": "bdev_nvme_attach_controller" 00:08:49.534 }, 00:08:49.534 { 00:08:49.534 "method": "bdev_wait_for_examine" 00:08:49.534 } 00:08:49.534 ] 00:08:49.534 } 00:08:49.534 ] 00:08:49.534 } 00:08:49.793 [2024-11-04 17:09:50.447642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.793 [2024-11-04 17:09:50.501922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.793 [2024-11-04 17:09:50.557735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.052  [2024-11-04T17:09:51.116Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:50.312 00:08:50.312 17:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:50.312 17:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:50.312 17:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:50.312 17:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:50.312 [2024-11-04 17:09:50.911321] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:50.312 [2024-11-04 17:09:50.911445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:08:50.313 { 00:08:50.313 "subsystems": [ 00:08:50.313 { 00:08:50.313 "subsystem": "bdev", 00:08:50.313 "config": [ 00:08:50.313 { 00:08:50.313 "params": { 00:08:50.313 "trtype": "pcie", 00:08:50.313 "traddr": "0000:00:10.0", 00:08:50.313 "name": "Nvme0" 00:08:50.313 }, 00:08:50.313 "method": "bdev_nvme_attach_controller" 00:08:50.313 }, 00:08:50.313 { 00:08:50.313 "method": "bdev_wait_for_examine" 00:08:50.313 } 00:08:50.313 ] 00:08:50.313 } 00:08:50.313 ] 00:08:50.313 } 00:08:50.313 [2024-11-04 17:09:51.055793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.313 [2024-11-04 17:09:51.112610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.574 [2024-11-04 17:09:51.167240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.574  [2024-11-04T17:09:51.637Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:50.833 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:50.833 17:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:50.833 { 00:08:50.833 "subsystems": [ 00:08:50.833 { 00:08:50.833 "subsystem": "bdev", 00:08:50.833 "config": [ 00:08:50.833 { 00:08:50.833 "params": { 00:08:50.833 "trtype": "pcie", 00:08:50.833 "traddr": "0000:00:10.0", 00:08:50.833 "name": "Nvme0" 00:08:50.833 }, 00:08:50.833 "method": "bdev_nvme_attach_controller" 00:08:50.833 }, 00:08:50.833 { 00:08:50.833 "method": "bdev_wait_for_examine" 00:08:50.833 } 00:08:50.833 ] 00:08:50.833 } 00:08:50.833 ] 00:08:50.833 } 00:08:50.833 [2024-11-04 17:09:51.528584] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:50.833 [2024-11-04 17:09:51.528839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59747 ] 00:08:51.091 [2024-11-04 17:09:51.680289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.091 [2024-11-04 17:09:51.743828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.091 [2024-11-04 17:09:51.802844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.350  [2024-11-04T17:09:52.154Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:51.350 00:08:51.350 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:51.350 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:51.350 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:51.350 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:51.350 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:51.350 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:51.350 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:51.350 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:51.918 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:51.918 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:51.918 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:51.918 17:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:51.918 [2024-11-04 17:09:52.708142] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:51.918 [2024-11-04 17:09:52.708254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:08:52.177 { 00:08:52.177 "subsystems": [ 00:08:52.177 { 00:08:52.177 "subsystem": "bdev", 00:08:52.177 "config": [ 00:08:52.177 { 00:08:52.177 "params": { 00:08:52.177 "trtype": "pcie", 00:08:52.177 "traddr": "0000:00:10.0", 00:08:52.177 "name": "Nvme0" 00:08:52.177 }, 00:08:52.177 "method": "bdev_nvme_attach_controller" 00:08:52.177 }, 00:08:52.177 { 00:08:52.177 "method": "bdev_wait_for_examine" 00:08:52.177 } 00:08:52.177 ] 00:08:52.177 } 00:08:52.177 ] 00:08:52.177 } 00:08:52.177 [2024-11-04 17:09:52.862448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.177 [2024-11-04 17:09:52.937242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.437 [2024-11-04 17:09:53.003761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.437  [2024-11-04T17:09:53.500Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:52.696 00:08:52.696 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:52.696 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:52.696 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:52.696 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:52.696 { 00:08:52.696 "subsystems": [ 00:08:52.696 { 00:08:52.696 "subsystem": "bdev", 00:08:52.696 "config": [ 00:08:52.696 { 00:08:52.696 "params": { 00:08:52.696 "trtype": "pcie", 00:08:52.696 "traddr": "0000:00:10.0", 00:08:52.696 "name": "Nvme0" 00:08:52.696 }, 00:08:52.696 "method": "bdev_nvme_attach_controller" 00:08:52.696 }, 00:08:52.696 { 00:08:52.696 "method": "bdev_wait_for_examine" 00:08:52.696 } 00:08:52.696 ] 00:08:52.696 } 00:08:52.696 ] 00:08:52.696 } 00:08:52.696 [2024-11-04 17:09:53.367141] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:52.696 [2024-11-04 17:09:53.367937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59785 ] 00:08:52.955 [2024-11-04 17:09:53.514369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.955 [2024-11-04 17:09:53.578355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.955 [2024-11-04 17:09:53.634859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.955  [2024-11-04T17:09:54.018Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:53.214 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:53.214 17:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:53.214 [2024-11-04 17:09:53.993438] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:53.214 [2024-11-04 17:09:53.993705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59800 ] 00:08:53.214 { 00:08:53.214 "subsystems": [ 00:08:53.214 { 00:08:53.214 "subsystem": "bdev", 00:08:53.214 "config": [ 00:08:53.214 { 00:08:53.214 "params": { 00:08:53.214 "trtype": "pcie", 00:08:53.214 "traddr": "0000:00:10.0", 00:08:53.214 "name": "Nvme0" 00:08:53.214 }, 00:08:53.214 "method": "bdev_nvme_attach_controller" 00:08:53.214 }, 00:08:53.214 { 00:08:53.214 "method": "bdev_wait_for_examine" 00:08:53.214 } 00:08:53.214 ] 00:08:53.214 } 00:08:53.214 ] 00:08:53.214 } 00:08:53.473 [2024-11-04 17:09:54.143491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.473 [2024-11-04 17:09:54.201994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.473 [2024-11-04 17:09:54.262860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.732  [2024-11-04T17:09:54.794Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:53.990 00:08:53.990 17:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:53.990 17:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:53.991 17:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:53.991 17:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:53.991 17:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:53.991 17:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:53.991 17:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:54.558 17:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:54.558 17:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:54.558 17:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:54.558 17:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:54.558 [2024-11-04 17:09:55.187743] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:54.558 [2024-11-04 17:09:55.188028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59825 ] 00:08:54.558 { 00:08:54.558 "subsystems": [ 00:08:54.558 { 00:08:54.558 "subsystem": "bdev", 00:08:54.558 "config": [ 00:08:54.558 { 00:08:54.558 "params": { 00:08:54.558 "trtype": "pcie", 00:08:54.558 "traddr": "0000:00:10.0", 00:08:54.558 "name": "Nvme0" 00:08:54.558 }, 00:08:54.558 "method": "bdev_nvme_attach_controller" 00:08:54.558 }, 00:08:54.558 { 00:08:54.558 "method": "bdev_wait_for_examine" 00:08:54.558 } 00:08:54.558 ] 00:08:54.558 } 00:08:54.558 ] 00:08:54.558 } 00:08:54.558 [2024-11-04 17:09:55.335515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.817 [2024-11-04 17:09:55.392464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.817 [2024-11-04 17:09:55.451041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.817  [2024-11-04T17:09:55.880Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:55.076 00:08:55.076 17:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:55.076 17:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:55.076 17:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:55.076 17:09:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:55.076 { 00:08:55.076 "subsystems": [ 00:08:55.076 { 00:08:55.076 "subsystem": "bdev", 00:08:55.076 "config": [ 00:08:55.076 { 00:08:55.076 "params": { 00:08:55.076 "trtype": "pcie", 00:08:55.076 "traddr": "0000:00:10.0", 00:08:55.076 "name": "Nvme0" 00:08:55.076 }, 00:08:55.076 "method": "bdev_nvme_attach_controller" 00:08:55.076 }, 00:08:55.076 { 00:08:55.076 "method": "bdev_wait_for_examine" 00:08:55.076 } 00:08:55.076 ] 00:08:55.076 } 00:08:55.076 ] 00:08:55.076 } 00:08:55.076 [2024-11-04 17:09:55.809353] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:55.076 [2024-11-04 17:09:55.810098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:08:55.335 [2024-11-04 17:09:55.955515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.335 [2024-11-04 17:09:56.018181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.335 [2024-11-04 17:09:56.080699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.594  [2024-11-04T17:09:56.398Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:55.594 00:08:55.594 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.594 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:55.854 17:09:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:55.854 [2024-11-04 17:09:56.455506] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:55.854 [2024-11-04 17:09:56.455770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:08:55.854 { 00:08:55.854 "subsystems": [ 00:08:55.854 { 00:08:55.854 "subsystem": "bdev", 00:08:55.854 "config": [ 00:08:55.854 { 00:08:55.854 "params": { 00:08:55.854 "trtype": "pcie", 00:08:55.854 "traddr": "0000:00:10.0", 00:08:55.854 "name": "Nvme0" 00:08:55.854 }, 00:08:55.854 "method": "bdev_nvme_attach_controller" 00:08:55.854 }, 00:08:55.854 { 00:08:55.854 "method": "bdev_wait_for_examine" 00:08:55.854 } 00:08:55.854 ] 00:08:55.854 } 00:08:55.854 ] 00:08:55.854 } 00:08:55.854 [2024-11-04 17:09:56.603321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.113 [2024-11-04 17:09:56.662203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.113 [2024-11-04 17:09:56.719845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.113  [2024-11-04T17:09:57.176Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:56.372 00:08:56.372 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:56.372 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:56.372 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:56.372 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:56.372 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:56.372 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:56.372 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:56.372 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:56.942 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:56.942 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:56.942 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:56.942 17:09:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:56.942 { 00:08:56.942 "subsystems": [ 00:08:56.942 { 00:08:56.942 "subsystem": "bdev", 00:08:56.942 "config": [ 00:08:56.942 { 00:08:56.942 "params": { 00:08:56.942 "trtype": "pcie", 00:08:56.942 "traddr": "0000:00:10.0", 00:08:56.942 "name": "Nvme0" 00:08:56.942 }, 00:08:56.942 "method": "bdev_nvme_attach_controller" 00:08:56.942 }, 00:08:56.942 { 00:08:56.942 "method": "bdev_wait_for_examine" 00:08:56.942 } 00:08:56.942 ] 00:08:56.942 } 00:08:56.942 ] 00:08:56.942 } 00:08:56.942 [2024-11-04 17:09:57.544082] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:56.942 [2024-11-04 17:09:57.544370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59873 ] 00:08:56.942 [2024-11-04 17:09:57.691475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.201 [2024-11-04 17:09:57.753297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.201 [2024-11-04 17:09:57.807897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.201  [2024-11-04T17:09:58.264Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:57.460 00:08:57.460 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:57.460 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:57.460 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:57.460 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:57.460 [2024-11-04 17:09:58.187998] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:57.460 [2024-11-04 17:09:58.188272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59892 ] 00:08:57.460 { 00:08:57.460 "subsystems": [ 00:08:57.460 { 00:08:57.460 "subsystem": "bdev", 00:08:57.460 "config": [ 00:08:57.460 { 00:08:57.460 "params": { 00:08:57.460 "trtype": "pcie", 00:08:57.460 "traddr": "0000:00:10.0", 00:08:57.460 "name": "Nvme0" 00:08:57.460 }, 00:08:57.460 "method": "bdev_nvme_attach_controller" 00:08:57.460 }, 00:08:57.460 { 00:08:57.460 "method": "bdev_wait_for_examine" 00:08:57.460 } 00:08:57.460 ] 00:08:57.460 } 00:08:57.460 ] 00:08:57.460 } 00:08:57.719 [2024-11-04 17:09:58.336982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.719 [2024-11-04 17:09:58.392274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.719 [2024-11-04 17:09:58.451529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.977  [2024-11-04T17:09:58.781Z] Copying: 48/48 [kB] (average 23 MBps) 00:08:57.977 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:57.977 17:09:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:58.270 { 00:08:58.270 "subsystems": [ 00:08:58.270 { 00:08:58.270 "subsystem": "bdev", 00:08:58.270 "config": [ 00:08:58.270 { 00:08:58.270 "params": { 00:08:58.270 "trtype": "pcie", 00:08:58.270 "traddr": "0000:00:10.0", 00:08:58.270 "name": "Nvme0" 00:08:58.270 }, 00:08:58.270 "method": "bdev_nvme_attach_controller" 00:08:58.270 }, 00:08:58.270 { 00:08:58.270 "method": "bdev_wait_for_examine" 00:08:58.270 } 00:08:58.270 ] 00:08:58.270 } 00:08:58.270 ] 00:08:58.270 } 00:08:58.270 [2024-11-04 17:09:58.806489] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:08:58.270 [2024-11-04 17:09:58.806717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:08:58.270 [2024-11-04 17:09:58.957502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.270 [2024-11-04 17:09:59.006604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.270 [2024-11-04 17:09:59.070356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.529  [2024-11-04T17:09:59.592Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:58.788 00:08:58.788 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:58.788 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:58.788 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:58.788 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:58.788 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:58.788 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:58.788 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:59.046 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:59.046 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:59.046 17:09:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 { 00:08:59.046 "subsystems": [ 00:08:59.046 { 00:08:59.046 "subsystem": "bdev", 00:08:59.046 "config": [ 00:08:59.046 { 00:08:59.046 "params": { 00:08:59.046 "trtype": "pcie", 00:08:59.046 "traddr": "0000:00:10.0", 00:08:59.046 "name": "Nvme0" 00:08:59.046 }, 00:08:59.046 "method": "bdev_nvme_attach_controller" 00:08:59.046 }, 00:08:59.046 { 00:08:59.046 "method": "bdev_wait_for_examine" 00:08:59.046 } 00:08:59.046 ] 00:08:59.046 } 00:08:59.046 ] 00:08:59.046 } 00:08:59.046 [2024-11-04 17:09:59.843539] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:59.046 [2024-11-04 17:09:59.843670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59921 ] 00:08:59.304 [2024-11-04 17:09:59.989517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.304 [2024-11-04 17:10:00.049903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.563 [2024-11-04 17:10:00.118153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.563  [2024-11-04T17:10:00.625Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:59.821 00:08:59.821 17:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:59.821 17:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:59.821 17:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:59.821 17:10:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:59.821 { 00:08:59.821 "subsystems": [ 00:08:59.821 { 00:08:59.821 "subsystem": "bdev", 00:08:59.821 "config": [ 00:08:59.821 { 00:08:59.821 "params": { 00:08:59.821 "trtype": "pcie", 00:08:59.821 "traddr": "0000:00:10.0", 00:08:59.821 "name": "Nvme0" 00:08:59.821 }, 00:08:59.821 "method": "bdev_nvme_attach_controller" 00:08:59.821 }, 00:08:59.821 { 00:08:59.821 "method": "bdev_wait_for_examine" 00:08:59.821 } 00:08:59.821 ] 00:08:59.821 } 00:08:59.821 ] 00:08:59.821 } 00:08:59.821 [2024-11-04 17:10:00.471009] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:08:59.821 [2024-11-04 17:10:00.471103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59940 ] 00:08:59.821 [2024-11-04 17:10:00.616688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.080 [2024-11-04 17:10:00.660813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.080 [2024-11-04 17:10:00.715869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.080  [2024-11-04T17:10:01.144Z] Copying: 48/48 [kB] (average 46 MBps) 00:09:00.340 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:00.340 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:00.340 { 00:09:00.340 "subsystems": [ 00:09:00.340 { 00:09:00.340 "subsystem": "bdev", 00:09:00.340 "config": [ 00:09:00.340 { 00:09:00.340 "params": { 00:09:00.340 "trtype": "pcie", 00:09:00.340 "traddr": "0000:00:10.0", 00:09:00.340 "name": "Nvme0" 00:09:00.340 }, 00:09:00.340 "method": "bdev_nvme_attach_controller" 00:09:00.340 }, 00:09:00.340 { 00:09:00.340 "method": "bdev_wait_for_examine" 00:09:00.340 } 00:09:00.340 ] 00:09:00.340 } 00:09:00.340 ] 00:09:00.340 } 00:09:00.340 [2024-11-04 17:10:01.060570] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:00.340 [2024-11-04 17:10:01.061029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59950 ] 00:09:00.599 [2024-11-04 17:10:01.214654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.599 [2024-11-04 17:10:01.268670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.599 [2024-11-04 17:10:01.327239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.858  [2024-11-04T17:10:01.662Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:00.858 00:09:00.858 00:09:00.858 real 0m14.389s 00:09:00.858 user 0m10.416s 00:09:00.858 sys 0m5.535s 00:09:00.858 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.858 ************************************ 00:09:00.858 END TEST dd_rw 00:09:00.858 ************************************ 00:09:00.858 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:01.117 ************************************ 00:09:01.117 START TEST dd_rw_offset 00:09:01.117 ************************************ 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=leiuwl9cwruc5xvmd8w2kdp9f3jn3oax9e1ul2vvrbchq0v8r65qx3i2pdggi80qzgrbsv9hkjoe5ok291jus5a0d80w3ocxwtcqie5a6gk5iiwm972djvbq5bn8ta35ui4yp2va6r6lgzz5aexh81a0clq9w8kjajaazo7kg10jbeqoga4t258s1qd1oqn418ooxde4bdvjy3zrt3j2k82hb1wq35kklluxwreu7q0k0ruzry9kdka0n8f1fduhkn6qc0sj5aqnmxowj3q3boixc3zk7zm6b3alebhi59h3e7kd7dwu9tx024h1n24wg7rhesnpp7ymkq9f5ck10jb0egamjfb4ccjxw33v1o8a33n04qqj0apddzms8bibhkvnjwo7gvwr5mtl6xpzlwsrzr59dflfd8ytrdl0m1kow2l2hwwfpw4eaj1t8vzy4l1gpxhny1ka5rat0tiw0njqv7j0ij1w0x2x518lf1az3vdawwz2xs1hclxsscvq224yjdugvkbtrhputhfhl1mw33wk7peep15ljuuqu9pqzicu9zxw26qjr8nkzkjfuko2mqenew0vieh4p6z00l6lnzos99uk8ahy9ooshhxv58pe7nlco9l4xch3m104i7wd9qa5no9v92nx8jlrfh5opqxc8lohju3m2ysyd09odn63sa7y5ipxirvuyf519w69pf35pto7q6q58hod88rs7rqveycnjcce87pmw6ram84um3ywlhbbkefrmnqr1ksoskc4y1ed0fvqg7qxuquylj1wgf4be6067a15l605j8ot14ezazilhg7747mwcd961h55u34gsegwna21frme3krtgrw1w7yalovibhjtvdlyysclij8ffq0siayu4wbc7qu9vktcgd8f2fklsi5y56o5aigg0btzsahbmou9e4x6q9qmpjmn27le5vnly5me4wjr30le8hqq3u4veso10ocfbims2fojfm0hdiw38rl58zw25zr579fk24z2uxlpz5z9kugfvfnze8lssgpxx6858eha0bo32md4qm33w4cmbjlxw754du1mj2fggqd0zpb6369cbg8wwyym6tba6a3dfg9cfa283nznayq7phm2aac979tcwb8rhw3drlbs72niry88zck7viu04fitgrofeld708cq43kdelw9pd9w5gu0dvi0t1lfb5l6p6531vrp7lzo6tb8cb7je0vtrhfbth0wmw1mhzsqdtfe3etpm2g3gi7vtuslzmcjuf8z7wten95q67rv5lvaij89rwu2yq80vp5tdriuy6189jwt69b6xjmxsbh758xlfcfn0a1kw5zcnbvugeoyj4zjbowhn75j04ulzg2v33v8lkc50it3sjdzz2utg6tow08mfoof9bkvcz7yy4tuzo0ykqprnkofptl23uwwhrdoc3xqia0enncfx7nwmesm9vqbfst05ncfy0mwz4s82hr1by6cvosbsne2lisd5k87nyauax7s4eoriwyc2pjjtgo2wqcjb7bjbixnn19e79pnizcgn2ara0htpwd18o09h3biymvg10c8aa9zgydchoqzimab6b01ng5aq9h12hojrw00zr0kfuotsl4vsgw6hqvkp8niknjufcxulzxly8wvu9sc7qqijpj4f7kx8mthwed60f735pdkoxfi31fom8f5u24xifzb0e4pnipih1ml109g4200vb53amfv3yt8gda65z0e26v3w1459o0qwc25erd2p1wwn4ue2ad1wa4deqwp5o9imh08v9hdab2jsjjzvh396hjjw26i6tzf6tag8nv2vg1yazcork1vrmqgdpqsb6kz14xtfay0j2t06su5ssv2m6dmuztrz8tshvifx924e4dehjbtw8zid80hrk1d7wnwa8e4h8vh5rb3jpkor00qzhjekvvt5oc5720z5k7q2czryin81wrabk20r7jzn45fmhuqsmcm3gpo3a6u2iwox66us2hvblc0kcl4tpao7ih6dvj37szuazswh3gsmjpdg2v70o05fb5pn21hi83a2w3guhv8quo6h05mtq87aj2f1hqb8jd1b438rhsfbxmch7655tsuuetinowxs04bzus9w6n8d19rhvvj4dr4obkvbsv9sb9l0x78d2t92g090csrgruax8r7lh6n6jks4ab0pu15c61l63vzo1etfj71gy6v2wq9d01oi7r3mq3n3ewkyv95jbcqkwyhbrs68l83rdt5dy4topyguy8579xcdcll2p3gewuevowlguxxv76r0s0n05gabqck0z8h3qk8tog40cz45w7opknxi4qojh1ieeppam4haf2jflwypjzydtkhjpy1wgo7i0tdbynjfz4v4ghc02sstgsqh9ssttf4b9in3llee7fv1i7ackbd8hkxyl0olfsr9qh3s3cu8sjg9mubq0wcs6rmmhdhcm5jk8h3ji622h4i6o4agqmzzui36dce5vpb01p2pw2fsyo7tu1gsik1u77ns08audcff902l1ccgnd6clo611o7g3l5f7p7we8ptn2xql97nd9kcem9qccu6kjb5mgn4zur36yia7otxylai8wvyjlgfv7f0z2vck726cfuo0asjd791i1n5sc2buljlbwyb1yff26xehrr9nlqwe1jgkljfo592sk9vr0ip6qik08sdhfw9uhbhxsu4ax45sukff4frk072n4stm6m7cafduzu12rpllvavgsf83975tn8zeulzmyceei8fyfpklm9df8mvi1m1v3ncf3cotwrzc2exuzz47ii0cwsianv3wkd06inmv61ffasipw7mmjnj80t3zgcryg7dq4bcraiprzy96mla6rthefec8ibfw2lpifys517gkwszbp3uotx8r7sg99lpaizxij4porz7s0wg763ackfh5fk64yjtlbemanndz3njds9qmbebtkfztfhbbjh67ctwipk6endzn470e4h8ex18il8emiomxalbx4iu1pgelh81sq2iifhsjs9qgmtxwkolx6th3nbf8puvoi0slhzl91umxcdc8993x1bkxue1b85eyi39uz9tbu44dzj781n2ueaakqxdow5h2ug62grv6329culstxvu9divc48vhw3tnoodwtkvie3orb9de4z9pgy6zj9jqtptrrlwd9i9dv5vhhqmkk7c6330h8dqju7jz9rvnmlbmfr0tzm2gwzkicj5e8q6zx5g90vf6c2tmb2gthk0lfsu3oe2fxs5mw8bppgacmfkzkvseygtahjtz59dt7d0vgczt1wd9gxmcmie0vp5nvmrg0qj3nkg84vaitwpie1uog6khq44l1nf4x5kh7nte3cca0wlgz0w5b0mdmtmjbvy9qne4ebz5j57oh84phno7kcytslme3ravocautynxpem234jud46tfx4zfcnfbxgj1qbc06kogvhlorz9t66d36i7vi7mogippds2ojneqmf6fwz426xirg7z
jo6500yf6h94vtmeohy4oswjrn9bwz7uxfo48u10w085b8c3kc6r0v2vwspv17ffuwcr54bt3cf4ra9foxytyrn9sflvqlyirvdldolc4pg2r113jvqi8e391dht4s2r4rkxpdlu7mbjji6mx00cfhzahp7627pi48saeb7ub25pusvula96zjyqbjwslapj4efwj1ry74icgpi3o1a91pogwhl1ccuz2p8tna16jqi0al7rj6x34dk8p9kih5bil2tbtrl8yyuarj1wnk7js25ilu9gylx6fq3ggstqdzakerxj83ma3e2e0uk4r66bsmq21ycab5i8afucxk8q5nkca8o7c6gbaakjk0lbq79yy8x48ptg2tv7we2gepciy4ynadozqetz02f7t4umvmcjczvdlmdorr9vha5hz7wk2prptdh699roi8yim1gy62odujofvpt15cpv1n871mpwww19ys265sep5k6kbw82w9z8ksk0s7m8w2oinbbn6xtolhu1mtbcxpqhamtrxo7tly8d5qf48r 00:09:01.117 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:09:01.118 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:09:01.118 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:01.118 17:10:01 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:01.118 [2024-11-04 17:10:01.780935] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:01.118 [2024-11-04 17:10:01.781188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59986 ] 00:09:01.118 { 00:09:01.118 "subsystems": [ 00:09:01.118 { 00:09:01.118 "subsystem": "bdev", 00:09:01.118 "config": [ 00:09:01.118 { 00:09:01.118 "params": { 00:09:01.118 "trtype": "pcie", 00:09:01.118 "traddr": "0000:00:10.0", 00:09:01.118 "name": "Nvme0" 00:09:01.118 }, 00:09:01.118 "method": "bdev_nvme_attach_controller" 00:09:01.118 }, 00:09:01.118 { 00:09:01.118 "method": "bdev_wait_for_examine" 00:09:01.118 } 00:09:01.118 ] 00:09:01.118 } 00:09:01.118 ] 00:09:01.118 } 00:09:01.376 [2024-11-04 17:10:01.925392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.376 [2024-11-04 17:10:01.985700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.376 [2024-11-04 17:10:02.040978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.376  [2024-11-04T17:10:02.440Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:09:01.636 00:09:01.636 17:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:09:01.636 17:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:09:01.636 17:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:01.636 17:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:01.636 { 00:09:01.636 "subsystems": [ 00:09:01.636 { 00:09:01.636 "subsystem": "bdev", 00:09:01.636 "config": [ 00:09:01.636 { 00:09:01.636 "params": { 00:09:01.636 "trtype": "pcie", 00:09:01.636 "traddr": "0000:00:10.0", 00:09:01.636 "name": "Nvme0" 00:09:01.636 }, 00:09:01.636 "method": "bdev_nvme_attach_controller" 00:09:01.636 }, 00:09:01.636 { 00:09:01.636 "method": "bdev_wait_for_examine" 00:09:01.636 } 00:09:01.636 ] 00:09:01.636 } 00:09:01.636 ] 00:09:01.636 } 00:09:01.636 [2024-11-04 17:10:02.409637] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:01.636 [2024-11-04 17:10:02.409738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60005 ] 00:09:01.895 [2024-11-04 17:10:02.552458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.895 [2024-11-04 17:10:02.608895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.895 [2024-11-04 17:10:02.667765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.193  [2024-11-04T17:10:02.997Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:09:02.193 00:09:02.193 ************************************ 00:09:02.193 END TEST dd_rw_offset 00:09:02.193 ************************************ 00:09:02.193 17:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:09:02.193 17:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ leiuwl9cwruc5xvmd8w2kdp9f3jn3oax9e1ul2vvrbchq0v8r65qx3i2pdggi80qzgrbsv9hkjoe5ok291jus5a0d80w3ocxwtcqie5a6gk5iiwm972djvbq5bn8ta35ui4yp2va6r6lgzz5aexh81a0clq9w8kjajaazo7kg10jbeqoga4t258s1qd1oqn418ooxde4bdvjy3zrt3j2k82hb1wq35kklluxwreu7q0k0ruzry9kdka0n8f1fduhkn6qc0sj5aqnmxowj3q3boixc3zk7zm6b3alebhi59h3e7kd7dwu9tx024h1n24wg7rhesnpp7ymkq9f5ck10jb0egamjfb4ccjxw33v1o8a33n04qqj0apddzms8bibhkvnjwo7gvwr5mtl6xpzlwsrzr59dflfd8ytrdl0m1kow2l2hwwfpw4eaj1t8vzy4l1gpxhny1ka5rat0tiw0njqv7j0ij1w0x2x518lf1az3vdawwz2xs1hclxsscvq224yjdugvkbtrhputhfhl1mw33wk7peep15ljuuqu9pqzicu9zxw26qjr8nkzkjfuko2mqenew0vieh4p6z00l6lnzos99uk8ahy9ooshhxv58pe7nlco9l4xch3m104i7wd9qa5no9v92nx8jlrfh5opqxc8lohju3m2ysyd09odn63sa7y5ipxirvuyf519w69pf35pto7q6q58hod88rs7rqveycnjcce87pmw6ram84um3ywlhbbkefrmnqr1ksoskc4y1ed0fvqg7qxuquylj1wgf4be6067a15l605j8ot14ezazilhg7747mwcd961h55u34gsegwna21frme3krtgrw1w7yalovibhjtvdlyysclij8ffq0siayu4wbc7qu9vktcgd8f2fklsi5y56o5aigg0btzsahbmou9e4x6q9qmpjmn27le5vnly5me4wjr30le8hqq3u4veso10ocfbims2fojfm0hdiw38rl58zw25zr579fk24z2uxlpz5z9kugfvfnze8lssgpxx6858eha0bo32md4qm33w4cmbjlxw754du1mj2fggqd0zpb6369cbg8wwyym6tba6a3dfg9cfa283nznayq7phm2aac979tcwb8rhw3drlbs72niry88zck7viu04fitgrofeld708cq43kdelw9pd9w5gu0dvi0t1lfb5l6p6531vrp7lzo6tb8cb7je0vtrhfbth0wmw1mhzsqdtfe3etpm2g3gi7vtuslzmcjuf8z7wten95q67rv5lvaij89rwu2yq80vp5tdriuy6189jwt69b6xjmxsbh758xlfcfn0a1kw5zcnbvugeoyj4zjbowhn75j04ulzg2v33v8lkc50it3sjdzz2utg6tow08mfoof9bkvcz7yy4tuzo0ykqprnkofptl23uwwhrdoc3xqia0enncfx7nwmesm9vqbfst05ncfy0mwz4s82hr1by6cvosbsne2lisd5k87nyauax7s4eoriwyc2pjjtgo2wqcjb7bjbixnn19e79pnizcgn2ara0htpwd18o09h3biymvg10c8aa9zgydchoqzimab6b01ng5aq9h12hojrw00zr0kfuotsl4vsgw6hqvkp8niknjufcxulzxly8wvu9sc7qqijpj4f7kx8mthwed60f735pdkoxfi31fom8f5u24xifzb0e4pnipih1ml109g4200vb53amfv3yt8gda65z0e26v3w1459o0qwc25erd2p1wwn4ue2ad1wa4deqwp5o9imh08v9hdab2jsjjzvh396hjjw26i6tzf6tag8nv2vg1yazcork1vrmqgdpqsb6kz14xtfay0j2t06su5ssv2m6dmuztrz8tshvifx924e4dehjbtw8zid80hrk1d7wnwa8e4h8vh5rb3jpkor00qzhjekvvt5oc5720z5k7q2czryin81wrabk20r7jzn45fmhuqsmcm3gpo3a6u2iwox66us2hvblc0kcl4tpao7ih6dvj37szuazswh3gsmjpdg2v70o05fb5pn21hi83a2w3guhv8quo6h05mtq87aj2f1hqb8jd1b438rhsfbxmch7655tsuuetinowxs04bzus9w6n8d19rhvvj4dr4obkvbsv9sb9l0x78d2t92g090csrgruax8r7lh6n6jks4ab0pu15c61l63vzo1etfj71gy6v2wq9d01oi7r3mq3n3ewkyv95jbcqkwyhbrs68l83rdt5dy4topyguy8579xcdcll2p3gewuevowlguxxv76r0s0n05gabqck0z8h3qk8tog40cz45w7opknxi4qojh1ieeppam4haf2jflwypjzydtkhjpy1wgo7i0tdbynjfz4v4ghc02sstgsqh9ssttf4b9in3llee7fv1i7ackbd8hkxyl0olfsr9qh3s3cu8sjg9
mubq0wcs6rmmhdhcm5jk8h3ji622h4i6o4agqmzzui36dce5vpb01p2pw2fsyo7tu1gsik1u77ns08audcff902l1ccgnd6clo611o7g3l5f7p7we8ptn2xql97nd9kcem9qccu6kjb5mgn4zur36yia7otxylai8wvyjlgfv7f0z2vck726cfuo0asjd791i1n5sc2buljlbwyb1yff26xehrr9nlqwe1jgkljfo592sk9vr0ip6qik08sdhfw9uhbhxsu4ax45sukff4frk072n4stm6m7cafduzu12rpllvavgsf83975tn8zeulzmyceei8fyfpklm9df8mvi1m1v3ncf3cotwrzc2exuzz47ii0cwsianv3wkd06inmv61ffasipw7mmjnj80t3zgcryg7dq4bcraiprzy96mla6rthefec8ibfw2lpifys517gkwszbp3uotx8r7sg99lpaizxij4porz7s0wg763ackfh5fk64yjtlbemanndz3njds9qmbebtkfztfhbbjh67ctwipk6endzn470e4h8ex18il8emiomxalbx4iu1pgelh81sq2iifhsjs9qgmtxwkolx6th3nbf8puvoi0slhzl91umxcdc8993x1bkxue1b85eyi39uz9tbu44dzj781n2ueaakqxdow5h2ug62grv6329culstxvu9divc48vhw3tnoodwtkvie3orb9de4z9pgy6zj9jqtptrrlwd9i9dv5vhhqmkk7c6330h8dqju7jz9rvnmlbmfr0tzm2gwzkicj5e8q6zx5g90vf6c2tmb2gthk0lfsu3oe2fxs5mw8bppgacmfkzkvseygtahjtz59dt7d0vgczt1wd9gxmcmie0vp5nvmrg0qj3nkg84vaitwpie1uog6khq44l1nf4x5kh7nte3cca0wlgz0w5b0mdmtmjbvy9qne4ebz5j57oh84phno7kcytslme3ravocautynxpem234jud46tfx4zfcnfbxgj1qbc06kogvhlorz9t66d36i7vi7mogippds2ojneqmf6fwz426xirg7zjo6500yf6h94vtmeohy4oswjrn9bwz7uxfo48u10w085b8c3kc6r0v2vwspv17ffuwcr54bt3cf4ra9foxytyrn9sflvqlyirvdldolc4pg2r113jvqi8e391dht4s2r4rkxpdlu7mbjji6mx00cfhzahp7627pi48saeb7ub25pusvula96zjyqbjwslapj4efwj1ry74icgpi3o1a91pogwhl1ccuz2p8tna16jqi0al7rj6x34dk8p9kih5bil2tbtrl8yyuarj1wnk7js25ilu9gylx6fq3ggstqdzakerxj83ma3e2e0uk4r66bsmq21ycab5i8afucxk8q5nkca8o7c6gbaakjk0lbq79yy8x48ptg2tv7we2gepciy4ynadozqetz02f7t4umvmcjczvdlmdorr9vha5hz7wk2prptdh699roi8yim1gy62odujofvpt15cpv1n871mpwww19ys265sep5k6kbw82w9z8ksk0s7m8w2oinbbn6xtolhu1mtbcxpqhamtrxo7tly8d5qf48r == \l\e\i\u\w\l\9\c\w\r\u\c\5\x\v\m\d\8\w\2\k\d\p\9\f\3\j\n\3\o\a\x\9\e\1\u\l\2\v\v\r\b\c\h\q\0\v\8\r\6\5\q\x\3\i\2\p\d\g\g\i\8\0\q\z\g\r\b\s\v\9\h\k\j\o\e\5\o\k\2\9\1\j\u\s\5\a\0\d\8\0\w\3\o\c\x\w\t\c\q\i\e\5\a\6\g\k\5\i\i\w\m\9\7\2\d\j\v\b\q\5\b\n\8\t\a\3\5\u\i\4\y\p\2\v\a\6\r\6\l\g\z\z\5\a\e\x\h\8\1\a\0\c\l\q\9\w\8\k\j\a\j\a\a\z\o\7\k\g\1\0\j\b\e\q\o\g\a\4\t\2\5\8\s\1\q\d\1\o\q\n\4\1\8\o\o\x\d\e\4\b\d\v\j\y\3\z\r\t\3\j\2\k\8\2\h\b\1\w\q\3\5\k\k\l\l\u\x\w\r\e\u\7\q\0\k\0\r\u\z\r\y\9\k\d\k\a\0\n\8\f\1\f\d\u\h\k\n\6\q\c\0\s\j\5\a\q\n\m\x\o\w\j\3\q\3\b\o\i\x\c\3\z\k\7\z\m\6\b\3\a\l\e\b\h\i\5\9\h\3\e\7\k\d\7\d\w\u\9\t\x\0\2\4\h\1\n\2\4\w\g\7\r\h\e\s\n\p\p\7\y\m\k\q\9\f\5\c\k\1\0\j\b\0\e\g\a\m\j\f\b\4\c\c\j\x\w\3\3\v\1\o\8\a\3\3\n\0\4\q\q\j\0\a\p\d\d\z\m\s\8\b\i\b\h\k\v\n\j\w\o\7\g\v\w\r\5\m\t\l\6\x\p\z\l\w\s\r\z\r\5\9\d\f\l\f\d\8\y\t\r\d\l\0\m\1\k\o\w\2\l\2\h\w\w\f\p\w\4\e\a\j\1\t\8\v\z\y\4\l\1\g\p\x\h\n\y\1\k\a\5\r\a\t\0\t\i\w\0\n\j\q\v\7\j\0\i\j\1\w\0\x\2\x\5\1\8\l\f\1\a\z\3\v\d\a\w\w\z\2\x\s\1\h\c\l\x\s\s\c\v\q\2\2\4\y\j\d\u\g\v\k\b\t\r\h\p\u\t\h\f\h\l\1\m\w\3\3\w\k\7\p\e\e\p\1\5\l\j\u\u\q\u\9\p\q\z\i\c\u\9\z\x\w\2\6\q\j\r\8\n\k\z\k\j\f\u\k\o\2\m\q\e\n\e\w\0\v\i\e\h\4\p\6\z\0\0\l\6\l\n\z\o\s\9\9\u\k\8\a\h\y\9\o\o\s\h\h\x\v\5\8\p\e\7\n\l\c\o\9\l\4\x\c\h\3\m\1\0\4\i\7\w\d\9\q\a\5\n\o\9\v\9\2\n\x\8\j\l\r\f\h\5\o\p\q\x\c\8\l\o\h\j\u\3\m\2\y\s\y\d\0\9\o\d\n\6\3\s\a\7\y\5\i\p\x\i\r\v\u\y\f\5\1\9\w\6\9\p\f\3\5\p\t\o\7\q\6\q\5\8\h\o\d\8\8\r\s\7\r\q\v\e\y\c\n\j\c\c\e\8\7\p\m\w\6\r\a\m\8\4\u\m\3\y\w\l\h\b\b\k\e\f\r\m\n\q\r\1\k\s\o\s\k\c\4\y\1\e\d\0\f\v\q\g\7\q\x\u\q\u\y\l\j\1\w\g\f\4\b\e\6\0\6\7\a\1\5\l\6\0\5\j\8\o\t\1\4\e\z\a\z\i\l\h\g\7\7\4\7\m\w\c\d\9\6\1\h\5\5\u\3\4\g\s\e\g\w\n\a\2\1\f\r\m\e\3\k\r\t\g\r\w\1\w\7\y\a\l\o\v\i\b\h\j\t\v\d\l\y\y\s\c\l\i\j\8\f\f\q\0\s\i\a\y\u\4\w\b\c\7\q\u\9\v\k\t\c\g\d\8\f\2\f\k\l\s\i\5\y\5\6\o\5\a\i\g\g\0\b\t\z\s\a\h\b\m\o\u\9\e\4\x\6\q\9\q\m\p\j\m\n\2\7\l\e\5\v\n\l\y\5\m\e
\4\w\j\r\3\0\l\e\8\h\q\q\3\u\4\v\e\s\o\1\0\o\c\f\b\i\m\s\2\f\o\j\f\m\0\h\d\i\w\3\8\r\l\5\8\z\w\2\5\z\r\5\7\9\f\k\2\4\z\2\u\x\l\p\z\5\z\9\k\u\g\f\v\f\n\z\e\8\l\s\s\g\p\x\x\6\8\5\8\e\h\a\0\b\o\3\2\m\d\4\q\m\3\3\w\4\c\m\b\j\l\x\w\7\5\4\d\u\1\m\j\2\f\g\g\q\d\0\z\p\b\6\3\6\9\c\b\g\8\w\w\y\y\m\6\t\b\a\6\a\3\d\f\g\9\c\f\a\2\8\3\n\z\n\a\y\q\7\p\h\m\2\a\a\c\9\7\9\t\c\w\b\8\r\h\w\3\d\r\l\b\s\7\2\n\i\r\y\8\8\z\c\k\7\v\i\u\0\4\f\i\t\g\r\o\f\e\l\d\7\0\8\c\q\4\3\k\d\e\l\w\9\p\d\9\w\5\g\u\0\d\v\i\0\t\1\l\f\b\5\l\6\p\6\5\3\1\v\r\p\7\l\z\o\6\t\b\8\c\b\7\j\e\0\v\t\r\h\f\b\t\h\0\w\m\w\1\m\h\z\s\q\d\t\f\e\3\e\t\p\m\2\g\3\g\i\7\v\t\u\s\l\z\m\c\j\u\f\8\z\7\w\t\e\n\9\5\q\6\7\r\v\5\l\v\a\i\j\8\9\r\w\u\2\y\q\8\0\v\p\5\t\d\r\i\u\y\6\1\8\9\j\w\t\6\9\b\6\x\j\m\x\s\b\h\7\5\8\x\l\f\c\f\n\0\a\1\k\w\5\z\c\n\b\v\u\g\e\o\y\j\4\z\j\b\o\w\h\n\7\5\j\0\4\u\l\z\g\2\v\3\3\v\8\l\k\c\5\0\i\t\3\s\j\d\z\z\2\u\t\g\6\t\o\w\0\8\m\f\o\o\f\9\b\k\v\c\z\7\y\y\4\t\u\z\o\0\y\k\q\p\r\n\k\o\f\p\t\l\2\3\u\w\w\h\r\d\o\c\3\x\q\i\a\0\e\n\n\c\f\x\7\n\w\m\e\s\m\9\v\q\b\f\s\t\0\5\n\c\f\y\0\m\w\z\4\s\8\2\h\r\1\b\y\6\c\v\o\s\b\s\n\e\2\l\i\s\d\5\k\8\7\n\y\a\u\a\x\7\s\4\e\o\r\i\w\y\c\2\p\j\j\t\g\o\2\w\q\c\j\b\7\b\j\b\i\x\n\n\1\9\e\7\9\p\n\i\z\c\g\n\2\a\r\a\0\h\t\p\w\d\1\8\o\0\9\h\3\b\i\y\m\v\g\1\0\c\8\a\a\9\z\g\y\d\c\h\o\q\z\i\m\a\b\6\b\0\1\n\g\5\a\q\9\h\1\2\h\o\j\r\w\0\0\z\r\0\k\f\u\o\t\s\l\4\v\s\g\w\6\h\q\v\k\p\8\n\i\k\n\j\u\f\c\x\u\l\z\x\l\y\8\w\v\u\9\s\c\7\q\q\i\j\p\j\4\f\7\k\x\8\m\t\h\w\e\d\6\0\f\7\3\5\p\d\k\o\x\f\i\3\1\f\o\m\8\f\5\u\2\4\x\i\f\z\b\0\e\4\p\n\i\p\i\h\1\m\l\1\0\9\g\4\2\0\0\v\b\5\3\a\m\f\v\3\y\t\8\g\d\a\6\5\z\0\e\2\6\v\3\w\1\4\5\9\o\0\q\w\c\2\5\e\r\d\2\p\1\w\w\n\4\u\e\2\a\d\1\w\a\4\d\e\q\w\p\5\o\9\i\m\h\0\8\v\9\h\d\a\b\2\j\s\j\j\z\v\h\3\9\6\h\j\j\w\2\6\i\6\t\z\f\6\t\a\g\8\n\v\2\v\g\1\y\a\z\c\o\r\k\1\v\r\m\q\g\d\p\q\s\b\6\k\z\1\4\x\t\f\a\y\0\j\2\t\0\6\s\u\5\s\s\v\2\m\6\d\m\u\z\t\r\z\8\t\s\h\v\i\f\x\9\2\4\e\4\d\e\h\j\b\t\w\8\z\i\d\8\0\h\r\k\1\d\7\w\n\w\a\8\e\4\h\8\v\h\5\r\b\3\j\p\k\o\r\0\0\q\z\h\j\e\k\v\v\t\5\o\c\5\7\2\0\z\5\k\7\q\2\c\z\r\y\i\n\8\1\w\r\a\b\k\2\0\r\7\j\z\n\4\5\f\m\h\u\q\s\m\c\m\3\g\p\o\3\a\6\u\2\i\w\o\x\6\6\u\s\2\h\v\b\l\c\0\k\c\l\4\t\p\a\o\7\i\h\6\d\v\j\3\7\s\z\u\a\z\s\w\h\3\g\s\m\j\p\d\g\2\v\7\0\o\0\5\f\b\5\p\n\2\1\h\i\8\3\a\2\w\3\g\u\h\v\8\q\u\o\6\h\0\5\m\t\q\8\7\a\j\2\f\1\h\q\b\8\j\d\1\b\4\3\8\r\h\s\f\b\x\m\c\h\7\6\5\5\t\s\u\u\e\t\i\n\o\w\x\s\0\4\b\z\u\s\9\w\6\n\8\d\1\9\r\h\v\v\j\4\d\r\4\o\b\k\v\b\s\v\9\s\b\9\l\0\x\7\8\d\2\t\9\2\g\0\9\0\c\s\r\g\r\u\a\x\8\r\7\l\h\6\n\6\j\k\s\4\a\b\0\p\u\1\5\c\6\1\l\6\3\v\z\o\1\e\t\f\j\7\1\g\y\6\v\2\w\q\9\d\0\1\o\i\7\r\3\m\q\3\n\3\e\w\k\y\v\9\5\j\b\c\q\k\w\y\h\b\r\s\6\8\l\8\3\r\d\t\5\d\y\4\t\o\p\y\g\u\y\8\5\7\9\x\c\d\c\l\l\2\p\3\g\e\w\u\e\v\o\w\l\g\u\x\x\v\7\6\r\0\s\0\n\0\5\g\a\b\q\c\k\0\z\8\h\3\q\k\8\t\o\g\4\0\c\z\4\5\w\7\o\p\k\n\x\i\4\q\o\j\h\1\i\e\e\p\p\a\m\4\h\a\f\2\j\f\l\w\y\p\j\z\y\d\t\k\h\j\p\y\1\w\g\o\7\i\0\t\d\b\y\n\j\f\z\4\v\4\g\h\c\0\2\s\s\t\g\s\q\h\9\s\s\t\t\f\4\b\9\i\n\3\l\l\e\e\7\f\v\1\i\7\a\c\k\b\d\8\h\k\x\y\l\0\o\l\f\s\r\9\q\h\3\s\3\c\u\8\s\j\g\9\m\u\b\q\0\w\c\s\6\r\m\m\h\d\h\c\m\5\j\k\8\h\3\j\i\6\2\2\h\4\i\6\o\4\a\g\q\m\z\z\u\i\3\6\d\c\e\5\v\p\b\0\1\p\2\p\w\2\f\s\y\o\7\t\u\1\g\s\i\k\1\u\7\7\n\s\0\8\a\u\d\c\f\f\9\0\2\l\1\c\c\g\n\d\6\c\l\o\6\1\1\o\7\g\3\l\5\f\7\p\7\w\e\8\p\t\n\2\x\q\l\9\7\n\d\9\k\c\e\m\9\q\c\c\u\6\k\j\b\5\m\g\n\4\z\u\r\3\6\y\i\a\7\o\t\x\y\l\a\i\8\w\v\y\j\l\g\f\v\7\f\0\z\2\v\c\k\7\2\6\c\f\u\o\0\a\s\j\d\7\9\1\i\1\n\5\s\c\2\b\u\l\j\l\b\w\y\b\1\y\f\f\2\6\x\e\h\r\r\9\n\l\q\w\e\1\j\g\k\l\j\f\o\5\9\2\s\k\9\v\r\0\i\p\6\q\i\k\0\8\s\d\h\f\w\9\u\h\b\h\x\s\u\4\a\x\4\5\
s\u\k\f\f\4\f\r\k\0\7\2\n\4\s\t\m\6\m\7\c\a\f\d\u\z\u\1\2\r\p\l\l\v\a\v\g\s\f\8\3\9\7\5\t\n\8\z\e\u\l\z\m\y\c\e\e\i\8\f\y\f\p\k\l\m\9\d\f\8\m\v\i\1\m\1\v\3\n\c\f\3\c\o\t\w\r\z\c\2\e\x\u\z\z\4\7\i\i\0\c\w\s\i\a\n\v\3\w\k\d\0\6\i\n\m\v\6\1\f\f\a\s\i\p\w\7\m\m\j\n\j\8\0\t\3\z\g\c\r\y\g\7\d\q\4\b\c\r\a\i\p\r\z\y\9\6\m\l\a\6\r\t\h\e\f\e\c\8\i\b\f\w\2\l\p\i\f\y\s\5\1\7\g\k\w\s\z\b\p\3\u\o\t\x\8\r\7\s\g\9\9\l\p\a\i\z\x\i\j\4\p\o\r\z\7\s\0\w\g\7\6\3\a\c\k\f\h\5\f\k\6\4\y\j\t\l\b\e\m\a\n\n\d\z\3\n\j\d\s\9\q\m\b\e\b\t\k\f\z\t\f\h\b\b\j\h\6\7\c\t\w\i\p\k\6\e\n\d\z\n\4\7\0\e\4\h\8\e\x\1\8\i\l\8\e\m\i\o\m\x\a\l\b\x\4\i\u\1\p\g\e\l\h\8\1\s\q\2\i\i\f\h\s\j\s\9\q\g\m\t\x\w\k\o\l\x\6\t\h\3\n\b\f\8\p\u\v\o\i\0\s\l\h\z\l\9\1\u\m\x\c\d\c\8\9\9\3\x\1\b\k\x\u\e\1\b\8\5\e\y\i\3\9\u\z\9\t\b\u\4\4\d\z\j\7\8\1\n\2\u\e\a\a\k\q\x\d\o\w\5\h\2\u\g\6\2\g\r\v\6\3\2\9\c\u\l\s\t\x\v\u\9\d\i\v\c\4\8\v\h\w\3\t\n\o\o\d\w\t\k\v\i\e\3\o\r\b\9\d\e\4\z\9\p\g\y\6\z\j\9\j\q\t\p\t\r\r\l\w\d\9\i\9\d\v\5\v\h\h\q\m\k\k\7\c\6\3\3\0\h\8\d\q\j\u\7\j\z\9\r\v\n\m\l\b\m\f\r\0\t\z\m\2\g\w\z\k\i\c\j\5\e\8\q\6\z\x\5\g\9\0\v\f\6\c\2\t\m\b\2\g\t\h\k\0\l\f\s\u\3\o\e\2\f\x\s\5\m\w\8\b\p\p\g\a\c\m\f\k\z\k\v\s\e\y\g\t\a\h\j\t\z\5\9\d\t\7\d\0\v\g\c\z\t\1\w\d\9\g\x\m\c\m\i\e\0\v\p\5\n\v\m\r\g\0\q\j\3\n\k\g\8\4\v\a\i\t\w\p\i\e\1\u\o\g\6\k\h\q\4\4\l\1\n\f\4\x\5\k\h\7\n\t\e\3\c\c\a\0\w\l\g\z\0\w\5\b\0\m\d\m\t\m\j\b\v\y\9\q\n\e\4\e\b\z\5\j\5\7\o\h\8\4\p\h\n\o\7\k\c\y\t\s\l\m\e\3\r\a\v\o\c\a\u\t\y\n\x\p\e\m\2\3\4\j\u\d\4\6\t\f\x\4\z\f\c\n\f\b\x\g\j\1\q\b\c\0\6\k\o\g\v\h\l\o\r\z\9\t\6\6\d\3\6\i\7\v\i\7\m\o\g\i\p\p\d\s\2\o\j\n\e\q\m\f\6\f\w\z\4\2\6\x\i\r\g\7\z\j\o\6\5\0\0\y\f\6\h\9\4\v\t\m\e\o\h\y\4\o\s\w\j\r\n\9\b\w\z\7\u\x\f\o\4\8\u\1\0\w\0\8\5\b\8\c\3\k\c\6\r\0\v\2\v\w\s\p\v\1\7\f\f\u\w\c\r\5\4\b\t\3\c\f\4\r\a\9\f\o\x\y\t\y\r\n\9\s\f\l\v\q\l\y\i\r\v\d\l\d\o\l\c\4\p\g\2\r\1\1\3\j\v\q\i\8\e\3\9\1\d\h\t\4\s\2\r\4\r\k\x\p\d\l\u\7\m\b\j\j\i\6\m\x\0\0\c\f\h\z\a\h\p\7\6\2\7\p\i\4\8\s\a\e\b\7\u\b\2\5\p\u\s\v\u\l\a\9\6\z\j\y\q\b\j\w\s\l\a\p\j\4\e\f\w\j\1\r\y\7\4\i\c\g\p\i\3\o\1\a\9\1\p\o\g\w\h\l\1\c\c\u\z\2\p\8\t\n\a\1\6\j\q\i\0\a\l\7\r\j\6\x\3\4\d\k\8\p\9\k\i\h\5\b\i\l\2\t\b\t\r\l\8\y\y\u\a\r\j\1\w\n\k\7\j\s\2\5\i\l\u\9\g\y\l\x\6\f\q\3\g\g\s\t\q\d\z\a\k\e\r\x\j\8\3\m\a\3\e\2\e\0\u\k\4\r\6\6\b\s\m\q\2\1\y\c\a\b\5\i\8\a\f\u\c\x\k\8\q\5\n\k\c\a\8\o\7\c\6\g\b\a\a\k\j\k\0\l\b\q\7\9\y\y\8\x\4\8\p\t\g\2\t\v\7\w\e\2\g\e\p\c\i\y\4\y\n\a\d\o\z\q\e\t\z\0\2\f\7\t\4\u\m\v\m\c\j\c\z\v\d\l\m\d\o\r\r\9\v\h\a\5\h\z\7\w\k\2\p\r\p\t\d\h\6\9\9\r\o\i\8\y\i\m\1\g\y\6\2\o\d\u\j\o\f\v\p\t\1\5\c\p\v\1\n\8\7\1\m\p\w\w\w\1\9\y\s\2\6\5\s\e\p\5\k\6\k\b\w\8\2\w\9\z\8\k\s\k\0\s\7\m\8\w\2\o\i\n\b\b\n\6\x\t\o\l\h\u\1\m\t\b\c\x\p\q\h\a\m\t\r\x\o\7\t\l\y\8\d\5\q\f\4\8\r ]] 00:09:02.193 00:09:02.193 real 0m1.287s 00:09:02.193 user 0m0.866s 00:09:02.193 sys 0m0.626s 00:09:02.193 17:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:02.193 17:10:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:02.453 17:10:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:02.453 [2024-11-04 17:10:03.071927] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:02.453 [2024-11-04 17:10:03.072035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60029 ] 00:09:02.453 { 00:09:02.453 "subsystems": [ 00:09:02.453 { 00:09:02.453 "subsystem": "bdev", 00:09:02.453 "config": [ 00:09:02.453 { 00:09:02.453 "params": { 00:09:02.453 "trtype": "pcie", 00:09:02.453 "traddr": "0000:00:10.0", 00:09:02.453 "name": "Nvme0" 00:09:02.453 }, 00:09:02.453 "method": "bdev_nvme_attach_controller" 00:09:02.453 }, 00:09:02.453 { 00:09:02.453 "method": "bdev_wait_for_examine" 00:09:02.453 } 00:09:02.453 ] 00:09:02.453 } 00:09:02.453 ] 00:09:02.453 } 00:09:02.453 [2024-11-04 17:10:03.221948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.712 [2024-11-04 17:10:03.288347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.712 [2024-11-04 17:10:03.346607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.712  [2024-11-04T17:10:03.775Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:02.971 00:09:02.971 17:10:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:02.971 ************************************ 00:09:02.971 END TEST spdk_dd_basic_rw 00:09:02.971 ************************************ 00:09:02.971 00:09:02.971 real 0m17.546s 00:09:02.971 user 0m12.408s 00:09:02.971 sys 0m6.828s 00:09:02.971 17:10:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:02.971 17:10:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 17:10:03 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:02.971 17:10:03 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:02.971 17:10:03 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:02.971 17:10:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:02.971 ************************************ 00:09:02.971 START TEST spdk_dd_posix 00:09:02.971 ************************************ 00:09:02.971 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:03.230 * Looking for test storage... 
00:09:03.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:03.230 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:03.230 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:09:03.230 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:03.230 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:03.230 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.230 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.230 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.230 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:03.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.231 --rc genhtml_branch_coverage=1 00:09:03.231 --rc genhtml_function_coverage=1 00:09:03.231 --rc genhtml_legend=1 00:09:03.231 --rc geninfo_all_blocks=1 00:09:03.231 --rc geninfo_unexecuted_blocks=1 00:09:03.231 00:09:03.231 ' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:03.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.231 --rc genhtml_branch_coverage=1 00:09:03.231 --rc genhtml_function_coverage=1 00:09:03.231 --rc genhtml_legend=1 00:09:03.231 --rc geninfo_all_blocks=1 00:09:03.231 --rc geninfo_unexecuted_blocks=1 00:09:03.231 00:09:03.231 ' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:03.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.231 --rc genhtml_branch_coverage=1 00:09:03.231 --rc genhtml_function_coverage=1 00:09:03.231 --rc genhtml_legend=1 00:09:03.231 --rc geninfo_all_blocks=1 00:09:03.231 --rc geninfo_unexecuted_blocks=1 00:09:03.231 00:09:03.231 ' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:03.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.231 --rc genhtml_branch_coverage=1 00:09:03.231 --rc genhtml_function_coverage=1 00:09:03.231 --rc genhtml_legend=1 00:09:03.231 --rc geninfo_all_blocks=1 00:09:03.231 --rc geninfo_unexecuted_blocks=1 00:09:03.231 00:09:03.231 ' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:09:03.231 * First test run, liburing in use 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:03.231 ************************************ 00:09:03.231 START TEST dd_flag_append 00:09:03.231 ************************************ 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=r1fx8y8nkq2bergnb3kh2fj02g2yzife 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=4cv62vulkboe26qzwd66n8i2ey81p51q 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s r1fx8y8nkq2bergnb3kh2fj02g2yzife 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 4cv62vulkboe26qzwd66n8i2ey81p51q 00:09:03.231 17:10:03 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:03.231 [2024-11-04 17:10:03.980997] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:03.231 [2024-11-04 17:10:03.981301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60103 ] 00:09:03.490 [2024-11-04 17:10:04.128315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.490 [2024-11-04 17:10:04.184472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.490 [2024-11-04 17:10:04.244453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.490  [2024-11-04T17:10:04.554Z] Copying: 32/32 [B] (average 31 kBps) 00:09:03.750 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 4cv62vulkboe26qzwd66n8i2ey81p51qr1fx8y8nkq2bergnb3kh2fj02g2yzife == \4\c\v\6\2\v\u\l\k\b\o\e\2\6\q\z\w\d\6\6\n\8\i\2\e\y\8\1\p\5\1\q\r\1\f\x\8\y\8\n\k\q\2\b\e\r\g\n\b\3\k\h\2\f\j\0\2\g\2\y\z\i\f\e ]] 00:09:03.750 00:09:03.750 real 0m0.550s 00:09:03.750 user 0m0.291s 00:09:03.750 sys 0m0.291s 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.750 ************************************ 00:09:03.750 END TEST dd_flag_append 00:09:03.750 ************************************ 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:03.750 ************************************ 00:09:03.750 START TEST dd_flag_directory 00:09:03.750 ************************************ 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:03.750 17:10:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:04.010 [2024-11-04 17:10:04.587774] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:04.010 [2024-11-04 17:10:04.587877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:09:04.010 [2024-11-04 17:10:04.736913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.010 [2024-11-04 17:10:04.801441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.269 [2024-11-04 17:10:04.861299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.269 [2024-11-04 17:10:04.899091] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:04.269 [2024-11-04 17:10:04.899147] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:04.269 [2024-11-04 17:10:04.899165] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.269 [2024-11-04 17:10:05.020105] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:04.528 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:09:04.528 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:04.528 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:09:04.528 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:09:04.528 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:09:04.528 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.529 17:10:05 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:04.529 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:04.529 [2024-11-04 17:10:05.141619] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:04.529 [2024-11-04 17:10:05.141879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60141 ] 00:09:04.529 [2024-11-04 17:10:05.287809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.788 [2024-11-04 17:10:05.354121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.788 [2024-11-04 17:10:05.408818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.789 [2024-11-04 17:10:05.442926] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:04.789 [2024-11-04 17:10:05.442981] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:04.789 [2024-11-04 17:10:05.443015] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.789 [2024-11-04 17:10:05.556919] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:05.049 00:09:05.049 real 0m1.095s 00:09:05.049 user 0m0.603s 00:09:05.049 sys 0m0.280s 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:09:05.049 ************************************ 00:09:05.049 END TEST dd_flag_directory 00:09:05.049 ************************************ 00:09:05.049 17:10:05 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:05.049 ************************************ 00:09:05.049 START TEST dd_flag_nofollow 00:09:05.049 ************************************ 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.049 17:10:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:05.049 [2024-11-04 17:10:05.744239] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:05.049 [2024-11-04 17:10:05.744333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60175 ] 00:09:05.309 [2024-11-04 17:10:05.888587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.309 [2024-11-04 17:10:05.947752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.309 [2024-11-04 17:10:06.005322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.309 [2024-11-04 17:10:06.044980] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:05.309 [2024-11-04 17:10:06.045052] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:05.309 [2024-11-04 17:10:06.045072] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:05.569 [2024-11-04 17:10:06.169951] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:05.569 17:10:06 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:05.569 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:05.569 [2024-11-04 17:10:06.309050] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:05.569 [2024-11-04 17:10:06.309316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60179 ] 00:09:05.828 [2024-11-04 17:10:06.457065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.828 [2024-11-04 17:10:06.511155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.828 [2024-11-04 17:10:06.569172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.828 [2024-11-04 17:10:06.605974] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:05.828 [2024-11-04 17:10:06.606048] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:05.828 [2024-11-04 17:10:06.606099] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.088 [2024-11-04 17:10:06.721793] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:09:06.088 17:10:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:06.088 [2024-11-04 17:10:06.830647] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:06.088 [2024-11-04 17:10:06.830739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60192 ] 00:09:06.353 [2024-11-04 17:10:06.968482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.353 [2024-11-04 17:10:07.024358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.353 [2024-11-04 17:10:07.077852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.353  [2024-11-04T17:10:07.426Z] Copying: 512/512 [B] (average 500 kBps) 00:09:06.622 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ m715s98l0vtuspiqgd4o11vdv3ydkvhc99gmeawlitj3vhywza4oj2n5cj54fk77e2tzpps8bzruv1lq7lot1auvk9tc01oxzsppjqgas3gnmo7bt0favjo5dm3f4ttzwslp3y1s6bqs1b6u2d5qyvs4t3xyw0qc0szcvyxnjr774n78nh6tzhlzdbonzx9haj5xxkrt8no3le8g84iuw2bgwglxdqt660f0fkgiwaj0lxs9zpqfuz586p972idujxlb81b4x6dyki642v88b85c9y2e4zu7t3hpvigc9oa9fixx7ez03h2mkdw29id32o8ybsfikk8lnycjn9ei4f79h31fb4xbrmvmr4uhdafantvb3370taxjedz2qtadaa0kkwc86mg3o4jd7uoskzz85ee4xluht1oknixj4bw8p2bc7zdn4dowwqf88jazmsw2gm581u5zyxvtv5d6ibns1g161t68a8dg09i2lexugvk2npmgxle4si56ak41 == \m\7\1\5\s\9\8\l\0\v\t\u\s\p\i\q\g\d\4\o\1\1\v\d\v\3\y\d\k\v\h\c\9\9\g\m\e\a\w\l\i\t\j\3\v\h\y\w\z\a\4\o\j\2\n\5\c\j\5\4\f\k\7\7\e\2\t\z\p\p\s\8\b\z\r\u\v\1\l\q\7\l\o\t\1\a\u\v\k\9\t\c\0\1\o\x\z\s\p\p\j\q\g\a\s\3\g\n\m\o\7\b\t\0\f\a\v\j\o\5\d\m\3\f\4\t\t\z\w\s\l\p\3\y\1\s\6\b\q\s\1\b\6\u\2\d\5\q\y\v\s\4\t\3\x\y\w\0\q\c\0\s\z\c\v\y\x\n\j\r\7\7\4\n\7\8\n\h\6\t\z\h\l\z\d\b\o\n\z\x\9\h\a\j\5\x\x\k\r\t\8\n\o\3\l\e\8\g\8\4\i\u\w\2\b\g\w\g\l\x\d\q\t\6\6\0\f\0\f\k\g\i\w\a\j\0\l\x\s\9\z\p\q\f\u\z\5\8\6\p\9\7\2\i\d\u\j\x\l\b\8\1\b\4\x\6\d\y\k\i\6\4\2\v\8\8\b\8\5\c\9\y\2\e\4\z\u\7\t\3\h\p\v\i\g\c\9\o\a\9\f\i\x\x\7\e\z\0\3\h\2\m\k\d\w\2\9\i\d\3\2\o\8\y\b\s\f\i\k\k\8\l\n\y\c\j\n\9\e\i\4\f\7\9\h\3\1\f\b\4\x\b\r\m\v\m\r\4\u\h\d\a\f\a\n\t\v\b\3\3\7\0\t\a\x\j\e\d\z\2\q\t\a\d\a\a\0\k\k\w\c\8\6\m\g\3\o\4\j\d\7\u\o\s\k\z\z\8\5\e\e\4\x\l\u\h\t\1\o\k\n\i\x\j\4\b\w\8\p\2\b\c\7\z\d\n\4\d\o\w\w\q\f\8\8\j\a\z\m\s\w\2\g\m\5\8\1\u\5\z\y\x\v\t\v\5\d\6\i\b\n\s\1\g\1\6\1\t\6\8\a\8\d\g\0\9\i\2\l\e\x\u\g\v\k\2\n\p\m\g\x\l\e\4\s\i\5\6\a\k\4\1 ]] 00:09:06.622 00:09:06.622 real 0m1.629s 00:09:06.622 user 0m0.877s 00:09:06.622 sys 0m0.560s 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:06.622 ************************************ 00:09:06.622 END TEST dd_flag_nofollow 00:09:06.622 ************************************ 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:06.622 ************************************ 00:09:06.622 START TEST dd_flag_noatime 00:09:06.622 ************************************ 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1730740207 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1730740207 00:09:06.622 17:10:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:09:07.999 17:10:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:07.999 [2024-11-04 17:10:08.433448] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:07.999 [2024-11-04 17:10:08.433767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60236 ] 00:09:07.999 [2024-11-04 17:10:08.587032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.999 [2024-11-04 17:10:08.658645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.999 [2024-11-04 17:10:08.717482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.999  [2024-11-04T17:10:09.062Z] Copying: 512/512 [B] (average 500 kBps) 00:09:08.258 00:09:08.258 17:10:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:08.258 17:10:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1730740207 )) 00:09:08.258 17:10:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.258 17:10:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1730740207 )) 00:09:08.258 17:10:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:08.258 [2024-11-04 17:10:08.999817] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:08.258 [2024-11-04 17:10:08.999917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60249 ] 00:09:08.517 [2024-11-04 17:10:09.147477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.517 [2024-11-04 17:10:09.203399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.517 [2024-11-04 17:10:09.260938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.517  [2024-11-04T17:10:09.580Z] Copying: 512/512 [B] (average 500 kBps) 00:09:08.776 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:08.776 ************************************ 00:09:08.776 END TEST dd_flag_noatime 00:09:08.776 ************************************ 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1730740209 )) 00:09:08.776 00:09:08.776 real 0m2.138s 00:09:08.776 user 0m0.612s 00:09:08.776 sys 0m0.588s 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:08.776 ************************************ 00:09:08.776 START TEST dd_flags_misc 00:09:08.776 ************************************ 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:08.776 17:10:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:09.035 [2024-11-04 17:10:09.600498] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:09.035 [2024-11-04 17:10:09.600738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60282 ] 00:09:09.035 [2024-11-04 17:10:09.739616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.035 [2024-11-04 17:10:09.800833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.294 [2024-11-04 17:10:09.857143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.294  [2024-11-04T17:10:10.098Z] Copying: 512/512 [B] (average 500 kBps) 00:09:09.294 00:09:09.294 17:10:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a8oa17yq43889knrqeb3t9anqery8l3t0k7zk5za6odcwzfa9nta8wn9inbps8rq8ec29fvj085312exmf4ombsflc52x1g9zgiu38yc0ovyl3cm82uerk9jj55341h2y7eqa6vqykskrhzrjhtpk5a93or730omp052itzflw8eboenazgqsd43bqkw091thbhrxm4yrmb12e4rzjk7dwsyytwcobtekod6yo8gfbizcybldxxux6yep623i3piir7ylj4tpzl256rmlqpwaog4w4nlgfooaiy7bnx9eb8r6kdwpygqm6f3uxr7vet6v6d54kyae30i2hnjf52m92zhcv4pj659u4a3hhqdwoshhl5y5ypm6duy904ir68tleo9fz48l3wghqvfkyugj827w42yj303svosfnq5sg2cil9w9ovrctvd5x4j8a3gcsr7p9idy6d43exphmcwmtsckkzed7h656cvppuvxnugq65k3n2gclwvd1sul1da == \a\8\o\a\1\7\y\q\4\3\8\8\9\k\n\r\q\e\b\3\t\9\a\n\q\e\r\y\8\l\3\t\0\k\7\z\k\5\z\a\6\o\d\c\w\z\f\a\9\n\t\a\8\w\n\9\i\n\b\p\s\8\r\q\8\e\c\2\9\f\v\j\0\8\5\3\1\2\e\x\m\f\4\o\m\b\s\f\l\c\5\2\x\1\g\9\z\g\i\u\3\8\y\c\0\o\v\y\l\3\c\m\8\2\u\e\r\k\9\j\j\5\5\3\4\1\h\2\y\7\e\q\a\6\v\q\y\k\s\k\r\h\z\r\j\h\t\p\k\5\a\9\3\o\r\7\3\0\o\m\p\0\5\2\i\t\z\f\l\w\8\e\b\o\e\n\a\z\g\q\s\d\4\3\b\q\k\w\0\9\1\t\h\b\h\r\x\m\4\y\r\m\b\1\2\e\4\r\z\j\k\7\d\w\s\y\y\t\w\c\o\b\t\e\k\o\d\6\y\o\8\g\f\b\i\z\c\y\b\l\d\x\x\u\x\6\y\e\p\6\2\3\i\3\p\i\i\r\7\y\l\j\4\t\p\z\l\2\5\6\r\m\l\q\p\w\a\o\g\4\w\4\n\l\g\f\o\o\a\i\y\7\b\n\x\9\e\b\8\r\6\k\d\w\p\y\g\q\m\6\f\3\u\x\r\7\v\e\t\6\v\6\d\5\4\k\y\a\e\3\0\i\2\h\n\j\f\5\2\m\9\2\z\h\c\v\4\p\j\6\5\9\u\4\a\3\h\h\q\d\w\o\s\h\h\l\5\y\5\y\p\m\6\d\u\y\9\0\4\i\r\6\8\t\l\e\o\9\f\z\4\8\l\3\w\g\h\q\v\f\k\y\u\g\j\8\2\7\w\4\2\y\j\3\0\3\s\v\o\s\f\n\q\5\s\g\2\c\i\l\9\w\9\o\v\r\c\t\v\d\5\x\4\j\8\a\3\g\c\s\r\7\p\9\i\d\y\6\d\4\3\e\x\p\h\m\c\w\m\t\s\c\k\k\z\e\d\7\h\6\5\6\c\v\p\p\u\v\x\n\u\g\q\6\5\k\3\n\2\g\c\l\w\v\d\1\s\u\l\1\d\a ]] 00:09:09.294 17:10:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:09.294 17:10:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:09.553 [2024-11-04 17:10:10.138589] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:09.553 [2024-11-04 17:10:10.138705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60287 ] 00:09:09.553 [2024-11-04 17:10:10.285335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.553 [2024-11-04 17:10:10.335302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.812 [2024-11-04 17:10:10.394014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.812  [2024-11-04T17:10:10.876Z] Copying: 512/512 [B] (average 500 kBps) 00:09:10.072 00:09:10.072 17:10:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a8oa17yq43889knrqeb3t9anqery8l3t0k7zk5za6odcwzfa9nta8wn9inbps8rq8ec29fvj085312exmf4ombsflc52x1g9zgiu38yc0ovyl3cm82uerk9jj55341h2y7eqa6vqykskrhzrjhtpk5a93or730omp052itzflw8eboenazgqsd43bqkw091thbhrxm4yrmb12e4rzjk7dwsyytwcobtekod6yo8gfbizcybldxxux6yep623i3piir7ylj4tpzl256rmlqpwaog4w4nlgfooaiy7bnx9eb8r6kdwpygqm6f3uxr7vet6v6d54kyae30i2hnjf52m92zhcv4pj659u4a3hhqdwoshhl5y5ypm6duy904ir68tleo9fz48l3wghqvfkyugj827w42yj303svosfnq5sg2cil9w9ovrctvd5x4j8a3gcsr7p9idy6d43exphmcwmtsckkzed7h656cvppuvxnugq65k3n2gclwvd1sul1da == \a\8\o\a\1\7\y\q\4\3\8\8\9\k\n\r\q\e\b\3\t\9\a\n\q\e\r\y\8\l\3\t\0\k\7\z\k\5\z\a\6\o\d\c\w\z\f\a\9\n\t\a\8\w\n\9\i\n\b\p\s\8\r\q\8\e\c\2\9\f\v\j\0\8\5\3\1\2\e\x\m\f\4\o\m\b\s\f\l\c\5\2\x\1\g\9\z\g\i\u\3\8\y\c\0\o\v\y\l\3\c\m\8\2\u\e\r\k\9\j\j\5\5\3\4\1\h\2\y\7\e\q\a\6\v\q\y\k\s\k\r\h\z\r\j\h\t\p\k\5\a\9\3\o\r\7\3\0\o\m\p\0\5\2\i\t\z\f\l\w\8\e\b\o\e\n\a\z\g\q\s\d\4\3\b\q\k\w\0\9\1\t\h\b\h\r\x\m\4\y\r\m\b\1\2\e\4\r\z\j\k\7\d\w\s\y\y\t\w\c\o\b\t\e\k\o\d\6\y\o\8\g\f\b\i\z\c\y\b\l\d\x\x\u\x\6\y\e\p\6\2\3\i\3\p\i\i\r\7\y\l\j\4\t\p\z\l\2\5\6\r\m\l\q\p\w\a\o\g\4\w\4\n\l\g\f\o\o\a\i\y\7\b\n\x\9\e\b\8\r\6\k\d\w\p\y\g\q\m\6\f\3\u\x\r\7\v\e\t\6\v\6\d\5\4\k\y\a\e\3\0\i\2\h\n\j\f\5\2\m\9\2\z\h\c\v\4\p\j\6\5\9\u\4\a\3\h\h\q\d\w\o\s\h\h\l\5\y\5\y\p\m\6\d\u\y\9\0\4\i\r\6\8\t\l\e\o\9\f\z\4\8\l\3\w\g\h\q\v\f\k\y\u\g\j\8\2\7\w\4\2\y\j\3\0\3\s\v\o\s\f\n\q\5\s\g\2\c\i\l\9\w\9\o\v\r\c\t\v\d\5\x\4\j\8\a\3\g\c\s\r\7\p\9\i\d\y\6\d\4\3\e\x\p\h\m\c\w\m\t\s\c\k\k\z\e\d\7\h\6\5\6\c\v\p\p\u\v\x\n\u\g\q\6\5\k\3\n\2\g\c\l\w\v\d\1\s\u\l\1\d\a ]] 00:09:10.072 17:10:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:10.072 17:10:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:10.072 [2024-11-04 17:10:10.683196] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:10.072 [2024-11-04 17:10:10.683318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:09:10.072 [2024-11-04 17:10:10.830302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.331 [2024-11-04 17:10:10.894587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.331 [2024-11-04 17:10:10.952532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.331  [2024-11-04T17:10:11.394Z] Copying: 512/512 [B] (average 125 kBps) 00:09:10.590 00:09:10.590 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a8oa17yq43889knrqeb3t9anqery8l3t0k7zk5za6odcwzfa9nta8wn9inbps8rq8ec29fvj085312exmf4ombsflc52x1g9zgiu38yc0ovyl3cm82uerk9jj55341h2y7eqa6vqykskrhzrjhtpk5a93or730omp052itzflw8eboenazgqsd43bqkw091thbhrxm4yrmb12e4rzjk7dwsyytwcobtekod6yo8gfbizcybldxxux6yep623i3piir7ylj4tpzl256rmlqpwaog4w4nlgfooaiy7bnx9eb8r6kdwpygqm6f3uxr7vet6v6d54kyae30i2hnjf52m92zhcv4pj659u4a3hhqdwoshhl5y5ypm6duy904ir68tleo9fz48l3wghqvfkyugj827w42yj303svosfnq5sg2cil9w9ovrctvd5x4j8a3gcsr7p9idy6d43exphmcwmtsckkzed7h656cvppuvxnugq65k3n2gclwvd1sul1da == \a\8\o\a\1\7\y\q\4\3\8\8\9\k\n\r\q\e\b\3\t\9\a\n\q\e\r\y\8\l\3\t\0\k\7\z\k\5\z\a\6\o\d\c\w\z\f\a\9\n\t\a\8\w\n\9\i\n\b\p\s\8\r\q\8\e\c\2\9\f\v\j\0\8\5\3\1\2\e\x\m\f\4\o\m\b\s\f\l\c\5\2\x\1\g\9\z\g\i\u\3\8\y\c\0\o\v\y\l\3\c\m\8\2\u\e\r\k\9\j\j\5\5\3\4\1\h\2\y\7\e\q\a\6\v\q\y\k\s\k\r\h\z\r\j\h\t\p\k\5\a\9\3\o\r\7\3\0\o\m\p\0\5\2\i\t\z\f\l\w\8\e\b\o\e\n\a\z\g\q\s\d\4\3\b\q\k\w\0\9\1\t\h\b\h\r\x\m\4\y\r\m\b\1\2\e\4\r\z\j\k\7\d\w\s\y\y\t\w\c\o\b\t\e\k\o\d\6\y\o\8\g\f\b\i\z\c\y\b\l\d\x\x\u\x\6\y\e\p\6\2\3\i\3\p\i\i\r\7\y\l\j\4\t\p\z\l\2\5\6\r\m\l\q\p\w\a\o\g\4\w\4\n\l\g\f\o\o\a\i\y\7\b\n\x\9\e\b\8\r\6\k\d\w\p\y\g\q\m\6\f\3\u\x\r\7\v\e\t\6\v\6\d\5\4\k\y\a\e\3\0\i\2\h\n\j\f\5\2\m\9\2\z\h\c\v\4\p\j\6\5\9\u\4\a\3\h\h\q\d\w\o\s\h\h\l\5\y\5\y\p\m\6\d\u\y\9\0\4\i\r\6\8\t\l\e\o\9\f\z\4\8\l\3\w\g\h\q\v\f\k\y\u\g\j\8\2\7\w\4\2\y\j\3\0\3\s\v\o\s\f\n\q\5\s\g\2\c\i\l\9\w\9\o\v\r\c\t\v\d\5\x\4\j\8\a\3\g\c\s\r\7\p\9\i\d\y\6\d\4\3\e\x\p\h\m\c\w\m\t\s\c\k\k\z\e\d\7\h\6\5\6\c\v\p\p\u\v\x\n\u\g\q\6\5\k\3\n\2\g\c\l\w\v\d\1\s\u\l\1\d\a ]] 00:09:10.590 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:10.590 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:10.590 [2024-11-04 17:10:11.256200] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
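For context on the write flags being cycled here: direct maps to O_DIRECT (bypass the page cache), nonblock to O_NONBLOCK, sync to O_SYNC (each write durable for data and metadata), and dsync to O_DSYNC (durable for data, plus only the metadata needed to read it back). One way to confirm which bits a given run actually passes to open(2), again sketched with plain GNU dd as a stand-in, is to trace the openat calls:

  # Sketch only: check that oflag=dsync really adds O_DSYNC on the output file.
  strace -f -e trace=openat -o open_trace.log \
      dd if=dump0 of=dump1 oflag=dsync bs=512 count=1 status=none
  grep dump1 open_trace.log   # the flags field should include O_DSYNC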
00:09:10.590 [2024-11-04 17:10:11.256324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60306 ] 00:09:10.849 [2024-11-04 17:10:11.403031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.849 [2024-11-04 17:10:11.468829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.849 [2024-11-04 17:10:11.522525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.849  [2024-11-04T17:10:11.919Z] Copying: 512/512 [B] (average 250 kBps) 00:09:11.115 00:09:11.115 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ a8oa17yq43889knrqeb3t9anqery8l3t0k7zk5za6odcwzfa9nta8wn9inbps8rq8ec29fvj085312exmf4ombsflc52x1g9zgiu38yc0ovyl3cm82uerk9jj55341h2y7eqa6vqykskrhzrjhtpk5a93or730omp052itzflw8eboenazgqsd43bqkw091thbhrxm4yrmb12e4rzjk7dwsyytwcobtekod6yo8gfbizcybldxxux6yep623i3piir7ylj4tpzl256rmlqpwaog4w4nlgfooaiy7bnx9eb8r6kdwpygqm6f3uxr7vet6v6d54kyae30i2hnjf52m92zhcv4pj659u4a3hhqdwoshhl5y5ypm6duy904ir68tleo9fz48l3wghqvfkyugj827w42yj303svosfnq5sg2cil9w9ovrctvd5x4j8a3gcsr7p9idy6d43exphmcwmtsckkzed7h656cvppuvxnugq65k3n2gclwvd1sul1da == \a\8\o\a\1\7\y\q\4\3\8\8\9\k\n\r\q\e\b\3\t\9\a\n\q\e\r\y\8\l\3\t\0\k\7\z\k\5\z\a\6\o\d\c\w\z\f\a\9\n\t\a\8\w\n\9\i\n\b\p\s\8\r\q\8\e\c\2\9\f\v\j\0\8\5\3\1\2\e\x\m\f\4\o\m\b\s\f\l\c\5\2\x\1\g\9\z\g\i\u\3\8\y\c\0\o\v\y\l\3\c\m\8\2\u\e\r\k\9\j\j\5\5\3\4\1\h\2\y\7\e\q\a\6\v\q\y\k\s\k\r\h\z\r\j\h\t\p\k\5\a\9\3\o\r\7\3\0\o\m\p\0\5\2\i\t\z\f\l\w\8\e\b\o\e\n\a\z\g\q\s\d\4\3\b\q\k\w\0\9\1\t\h\b\h\r\x\m\4\y\r\m\b\1\2\e\4\r\z\j\k\7\d\w\s\y\y\t\w\c\o\b\t\e\k\o\d\6\y\o\8\g\f\b\i\z\c\y\b\l\d\x\x\u\x\6\y\e\p\6\2\3\i\3\p\i\i\r\7\y\l\j\4\t\p\z\l\2\5\6\r\m\l\q\p\w\a\o\g\4\w\4\n\l\g\f\o\o\a\i\y\7\b\n\x\9\e\b\8\r\6\k\d\w\p\y\g\q\m\6\f\3\u\x\r\7\v\e\t\6\v\6\d\5\4\k\y\a\e\3\0\i\2\h\n\j\f\5\2\m\9\2\z\h\c\v\4\p\j\6\5\9\u\4\a\3\h\h\q\d\w\o\s\h\h\l\5\y\5\y\p\m\6\d\u\y\9\0\4\i\r\6\8\t\l\e\o\9\f\z\4\8\l\3\w\g\h\q\v\f\k\y\u\g\j\8\2\7\w\4\2\y\j\3\0\3\s\v\o\s\f\n\q\5\s\g\2\c\i\l\9\w\9\o\v\r\c\t\v\d\5\x\4\j\8\a\3\g\c\s\r\7\p\9\i\d\y\6\d\4\3\e\x\p\h\m\c\w\m\t\s\c\k\k\z\e\d\7\h\6\5\6\c\v\p\p\u\v\x\n\u\g\q\6\5\k\3\n\2\g\c\l\w\v\d\1\s\u\l\1\d\a ]] 00:09:11.115 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:11.115 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:11.115 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:11.115 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:11.115 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:11.115 17:10:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:11.115 [2024-11-04 17:10:11.806464] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:11.115 [2024-11-04 17:10:11.806565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60321 ] 00:09:11.384 [2024-11-04 17:10:11.951684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.384 [2024-11-04 17:10:12.011882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.384 [2024-11-04 17:10:12.067837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.384  [2024-11-04T17:10:12.448Z] Copying: 512/512 [B] (average 500 kBps) 00:09:11.644 00:09:11.644 17:10:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ghwngawrpwfeggmb0li7x2fvjifyi94jojdzw5yrqx9t6x9lq3aobv46eoxtbnzj4vms8kq76f1fsy4pfbxqb4wcly25seb8nx6kykw1ertxh8ttuyl2nm3kvquj2nb5ofvuigkxwsava9y1y0y35phhfb77ma72p3hxrj2i5ozdsyaymajmmt35a1j5y129dehs2e5ep1dsqeb2whyu4465uqkh7g7vn56id3w0xa3lfvm8tvepz4wcyeizwk8vf39du4zgqmij8hps4mz3nvtv7ak79tdzjw7mnbdnqb34l8xjnme03567hfmildfdfo9019yafrcw1m76gl0y16dzvvq5edo6bmohdovg6ma48euafmxcvkt2wag6rv3k38leq84h58g9laqdzex5m58b72kzsnoi70bf5qgigekyjg69gdywwa0fg6j42d9qshz4d63m58912gp9epkmbk9n9ehft6e7mtsnu4p9m44z94k1ly6krldtc4b7gjlb == \g\h\w\n\g\a\w\r\p\w\f\e\g\g\m\b\0\l\i\7\x\2\f\v\j\i\f\y\i\9\4\j\o\j\d\z\w\5\y\r\q\x\9\t\6\x\9\l\q\3\a\o\b\v\4\6\e\o\x\t\b\n\z\j\4\v\m\s\8\k\q\7\6\f\1\f\s\y\4\p\f\b\x\q\b\4\w\c\l\y\2\5\s\e\b\8\n\x\6\k\y\k\w\1\e\r\t\x\h\8\t\t\u\y\l\2\n\m\3\k\v\q\u\j\2\n\b\5\o\f\v\u\i\g\k\x\w\s\a\v\a\9\y\1\y\0\y\3\5\p\h\h\f\b\7\7\m\a\7\2\p\3\h\x\r\j\2\i\5\o\z\d\s\y\a\y\m\a\j\m\m\t\3\5\a\1\j\5\y\1\2\9\d\e\h\s\2\e\5\e\p\1\d\s\q\e\b\2\w\h\y\u\4\4\6\5\u\q\k\h\7\g\7\v\n\5\6\i\d\3\w\0\x\a\3\l\f\v\m\8\t\v\e\p\z\4\w\c\y\e\i\z\w\k\8\v\f\3\9\d\u\4\z\g\q\m\i\j\8\h\p\s\4\m\z\3\n\v\t\v\7\a\k\7\9\t\d\z\j\w\7\m\n\b\d\n\q\b\3\4\l\8\x\j\n\m\e\0\3\5\6\7\h\f\m\i\l\d\f\d\f\o\9\0\1\9\y\a\f\r\c\w\1\m\7\6\g\l\0\y\1\6\d\z\v\v\q\5\e\d\o\6\b\m\o\h\d\o\v\g\6\m\a\4\8\e\u\a\f\m\x\c\v\k\t\2\w\a\g\6\r\v\3\k\3\8\l\e\q\8\4\h\5\8\g\9\l\a\q\d\z\e\x\5\m\5\8\b\7\2\k\z\s\n\o\i\7\0\b\f\5\q\g\i\g\e\k\y\j\g\6\9\g\d\y\w\w\a\0\f\g\6\j\4\2\d\9\q\s\h\z\4\d\6\3\m\5\8\9\1\2\g\p\9\e\p\k\m\b\k\9\n\9\e\h\f\t\6\e\7\m\t\s\n\u\4\p\9\m\4\4\z\9\4\k\1\l\y\6\k\r\l\d\t\c\4\b\7\g\j\l\b ]] 00:09:11.644 17:10:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:11.644 17:10:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:11.644 [2024-11-04 17:10:12.350123] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:11.644 [2024-11-04 17:10:12.350270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60327 ] 00:09:11.904 [2024-11-04 17:10:12.494519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.904 [2024-11-04 17:10:12.546385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.904 [2024-11-04 17:10:12.600412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:11.904  [2024-11-04T17:10:12.967Z] Copying: 512/512 [B] (average 500 kBps) 00:09:12.163 00:09:12.163 17:10:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ghwngawrpwfeggmb0li7x2fvjifyi94jojdzw5yrqx9t6x9lq3aobv46eoxtbnzj4vms8kq76f1fsy4pfbxqb4wcly25seb8nx6kykw1ertxh8ttuyl2nm3kvquj2nb5ofvuigkxwsava9y1y0y35phhfb77ma72p3hxrj2i5ozdsyaymajmmt35a1j5y129dehs2e5ep1dsqeb2whyu4465uqkh7g7vn56id3w0xa3lfvm8tvepz4wcyeizwk8vf39du4zgqmij8hps4mz3nvtv7ak79tdzjw7mnbdnqb34l8xjnme03567hfmildfdfo9019yafrcw1m76gl0y16dzvvq5edo6bmohdovg6ma48euafmxcvkt2wag6rv3k38leq84h58g9laqdzex5m58b72kzsnoi70bf5qgigekyjg69gdywwa0fg6j42d9qshz4d63m58912gp9epkmbk9n9ehft6e7mtsnu4p9m44z94k1ly6krldtc4b7gjlb == \g\h\w\n\g\a\w\r\p\w\f\e\g\g\m\b\0\l\i\7\x\2\f\v\j\i\f\y\i\9\4\j\o\j\d\z\w\5\y\r\q\x\9\t\6\x\9\l\q\3\a\o\b\v\4\6\e\o\x\t\b\n\z\j\4\v\m\s\8\k\q\7\6\f\1\f\s\y\4\p\f\b\x\q\b\4\w\c\l\y\2\5\s\e\b\8\n\x\6\k\y\k\w\1\e\r\t\x\h\8\t\t\u\y\l\2\n\m\3\k\v\q\u\j\2\n\b\5\o\f\v\u\i\g\k\x\w\s\a\v\a\9\y\1\y\0\y\3\5\p\h\h\f\b\7\7\m\a\7\2\p\3\h\x\r\j\2\i\5\o\z\d\s\y\a\y\m\a\j\m\m\t\3\5\a\1\j\5\y\1\2\9\d\e\h\s\2\e\5\e\p\1\d\s\q\e\b\2\w\h\y\u\4\4\6\5\u\q\k\h\7\g\7\v\n\5\6\i\d\3\w\0\x\a\3\l\f\v\m\8\t\v\e\p\z\4\w\c\y\e\i\z\w\k\8\v\f\3\9\d\u\4\z\g\q\m\i\j\8\h\p\s\4\m\z\3\n\v\t\v\7\a\k\7\9\t\d\z\j\w\7\m\n\b\d\n\q\b\3\4\l\8\x\j\n\m\e\0\3\5\6\7\h\f\m\i\l\d\f\d\f\o\9\0\1\9\y\a\f\r\c\w\1\m\7\6\g\l\0\y\1\6\d\z\v\v\q\5\e\d\o\6\b\m\o\h\d\o\v\g\6\m\a\4\8\e\u\a\f\m\x\c\v\k\t\2\w\a\g\6\r\v\3\k\3\8\l\e\q\8\4\h\5\8\g\9\l\a\q\d\z\e\x\5\m\5\8\b\7\2\k\z\s\n\o\i\7\0\b\f\5\q\g\i\g\e\k\y\j\g\6\9\g\d\y\w\w\a\0\f\g\6\j\4\2\d\9\q\s\h\z\4\d\6\3\m\5\8\9\1\2\g\p\9\e\p\k\m\b\k\9\n\9\e\h\f\t\6\e\7\m\t\s\n\u\4\p\9\m\4\4\z\9\4\k\1\l\y\6\k\r\l\d\t\c\4\b\7\g\j\l\b ]] 00:09:12.163 17:10:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:12.163 17:10:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:12.163 [2024-11-04 17:10:12.864354] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:12.163 [2024-11-04 17:10:12.864738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60342 ] 00:09:12.423 [2024-11-04 17:10:13.001755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.423 [2024-11-04 17:10:13.062789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.423 [2024-11-04 17:10:13.123283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.423  [2024-11-04T17:10:13.487Z] Copying: 512/512 [B] (average 250 kBps) 00:09:12.683 00:09:12.683 17:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ghwngawrpwfeggmb0li7x2fvjifyi94jojdzw5yrqx9t6x9lq3aobv46eoxtbnzj4vms8kq76f1fsy4pfbxqb4wcly25seb8nx6kykw1ertxh8ttuyl2nm3kvquj2nb5ofvuigkxwsava9y1y0y35phhfb77ma72p3hxrj2i5ozdsyaymajmmt35a1j5y129dehs2e5ep1dsqeb2whyu4465uqkh7g7vn56id3w0xa3lfvm8tvepz4wcyeizwk8vf39du4zgqmij8hps4mz3nvtv7ak79tdzjw7mnbdnqb34l8xjnme03567hfmildfdfo9019yafrcw1m76gl0y16dzvvq5edo6bmohdovg6ma48euafmxcvkt2wag6rv3k38leq84h58g9laqdzex5m58b72kzsnoi70bf5qgigekyjg69gdywwa0fg6j42d9qshz4d63m58912gp9epkmbk9n9ehft6e7mtsnu4p9m44z94k1ly6krldtc4b7gjlb == \g\h\w\n\g\a\w\r\p\w\f\e\g\g\m\b\0\l\i\7\x\2\f\v\j\i\f\y\i\9\4\j\o\j\d\z\w\5\y\r\q\x\9\t\6\x\9\l\q\3\a\o\b\v\4\6\e\o\x\t\b\n\z\j\4\v\m\s\8\k\q\7\6\f\1\f\s\y\4\p\f\b\x\q\b\4\w\c\l\y\2\5\s\e\b\8\n\x\6\k\y\k\w\1\e\r\t\x\h\8\t\t\u\y\l\2\n\m\3\k\v\q\u\j\2\n\b\5\o\f\v\u\i\g\k\x\w\s\a\v\a\9\y\1\y\0\y\3\5\p\h\h\f\b\7\7\m\a\7\2\p\3\h\x\r\j\2\i\5\o\z\d\s\y\a\y\m\a\j\m\m\t\3\5\a\1\j\5\y\1\2\9\d\e\h\s\2\e\5\e\p\1\d\s\q\e\b\2\w\h\y\u\4\4\6\5\u\q\k\h\7\g\7\v\n\5\6\i\d\3\w\0\x\a\3\l\f\v\m\8\t\v\e\p\z\4\w\c\y\e\i\z\w\k\8\v\f\3\9\d\u\4\z\g\q\m\i\j\8\h\p\s\4\m\z\3\n\v\t\v\7\a\k\7\9\t\d\z\j\w\7\m\n\b\d\n\q\b\3\4\l\8\x\j\n\m\e\0\3\5\6\7\h\f\m\i\l\d\f\d\f\o\9\0\1\9\y\a\f\r\c\w\1\m\7\6\g\l\0\y\1\6\d\z\v\v\q\5\e\d\o\6\b\m\o\h\d\o\v\g\6\m\a\4\8\e\u\a\f\m\x\c\v\k\t\2\w\a\g\6\r\v\3\k\3\8\l\e\q\8\4\h\5\8\g\9\l\a\q\d\z\e\x\5\m\5\8\b\7\2\k\z\s\n\o\i\7\0\b\f\5\q\g\i\g\e\k\y\j\g\6\9\g\d\y\w\w\a\0\f\g\6\j\4\2\d\9\q\s\h\z\4\d\6\3\m\5\8\9\1\2\g\p\9\e\p\k\m\b\k\9\n\9\e\h\f\t\6\e\7\m\t\s\n\u\4\p\9\m\4\4\z\9\4\k\1\l\y\6\k\r\l\d\t\c\4\b\7\g\j\l\b ]] 00:09:12.683 17:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:12.683 17:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:12.683 [2024-11-04 17:10:13.422350] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:12.683 [2024-11-04 17:10:13.422486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60346 ] 00:09:12.942 [2024-11-04 17:10:13.570730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.942 [2024-11-04 17:10:13.621328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.942 [2024-11-04 17:10:13.676426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.942  [2024-11-04T17:10:14.005Z] Copying: 512/512 [B] (average 166 kBps) 00:09:13.201 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ghwngawrpwfeggmb0li7x2fvjifyi94jojdzw5yrqx9t6x9lq3aobv46eoxtbnzj4vms8kq76f1fsy4pfbxqb4wcly25seb8nx6kykw1ertxh8ttuyl2nm3kvquj2nb5ofvuigkxwsava9y1y0y35phhfb77ma72p3hxrj2i5ozdsyaymajmmt35a1j5y129dehs2e5ep1dsqeb2whyu4465uqkh7g7vn56id3w0xa3lfvm8tvepz4wcyeizwk8vf39du4zgqmij8hps4mz3nvtv7ak79tdzjw7mnbdnqb34l8xjnme03567hfmildfdfo9019yafrcw1m76gl0y16dzvvq5edo6bmohdovg6ma48euafmxcvkt2wag6rv3k38leq84h58g9laqdzex5m58b72kzsnoi70bf5qgigekyjg69gdywwa0fg6j42d9qshz4d63m58912gp9epkmbk9n9ehft6e7mtsnu4p9m44z94k1ly6krldtc4b7gjlb == \g\h\w\n\g\a\w\r\p\w\f\e\g\g\m\b\0\l\i\7\x\2\f\v\j\i\f\y\i\9\4\j\o\j\d\z\w\5\y\r\q\x\9\t\6\x\9\l\q\3\a\o\b\v\4\6\e\o\x\t\b\n\z\j\4\v\m\s\8\k\q\7\6\f\1\f\s\y\4\p\f\b\x\q\b\4\w\c\l\y\2\5\s\e\b\8\n\x\6\k\y\k\w\1\e\r\t\x\h\8\t\t\u\y\l\2\n\m\3\k\v\q\u\j\2\n\b\5\o\f\v\u\i\g\k\x\w\s\a\v\a\9\y\1\y\0\y\3\5\p\h\h\f\b\7\7\m\a\7\2\p\3\h\x\r\j\2\i\5\o\z\d\s\y\a\y\m\a\j\m\m\t\3\5\a\1\j\5\y\1\2\9\d\e\h\s\2\e\5\e\p\1\d\s\q\e\b\2\w\h\y\u\4\4\6\5\u\q\k\h\7\g\7\v\n\5\6\i\d\3\w\0\x\a\3\l\f\v\m\8\t\v\e\p\z\4\w\c\y\e\i\z\w\k\8\v\f\3\9\d\u\4\z\g\q\m\i\j\8\h\p\s\4\m\z\3\n\v\t\v\7\a\k\7\9\t\d\z\j\w\7\m\n\b\d\n\q\b\3\4\l\8\x\j\n\m\e\0\3\5\6\7\h\f\m\i\l\d\f\d\f\o\9\0\1\9\y\a\f\r\c\w\1\m\7\6\g\l\0\y\1\6\d\z\v\v\q\5\e\d\o\6\b\m\o\h\d\o\v\g\6\m\a\4\8\e\u\a\f\m\x\c\v\k\t\2\w\a\g\6\r\v\3\k\3\8\l\e\q\8\4\h\5\8\g\9\l\a\q\d\z\e\x\5\m\5\8\b\7\2\k\z\s\n\o\i\7\0\b\f\5\q\g\i\g\e\k\y\j\g\6\9\g\d\y\w\w\a\0\f\g\6\j\4\2\d\9\q\s\h\z\4\d\6\3\m\5\8\9\1\2\g\p\9\e\p\k\m\b\k\9\n\9\e\h\f\t\6\e\7\m\t\s\n\u\4\p\9\m\4\4\z\9\4\k\1\l\y\6\k\r\l\d\t\c\4\b\7\g\j\l\b ]] 00:09:13.201 00:09:13.201 real 0m4.372s 00:09:13.201 user 0m2.341s 00:09:13.201 sys 0m2.251s 00:09:13.201 ************************************ 00:09:13.201 END TEST dd_flags_misc 00:09:13.201 ************************************ 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:09:13.201 * Second test run, disabling liburing, forcing AIO 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.201 ************************************ 00:09:13.201 START TEST dd_flag_append_forced_aio 00:09:13.201 ************************************ 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=tc3b899dtsqu7wx8l8xsv7xl6u8hj2mx 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=45iojfj46yxxz1y5dpnm5seqnqtl4bd7 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s tc3b899dtsqu7wx8l8xsv7xl6u8hj2mx 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 45iojfj46yxxz1y5dpnm5seqnqtl4bd7 00:09:13.201 17:10:13 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:13.460 [2024-11-04 17:10:14.040748] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
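The append case just launched seeds two short random strings (dump0 and dump1 above), copies dump0 onto dump1 with O_APPEND, and then expects dump1 to hold its original string immediately followed by dump0's. The same behaviour can be reproduced stand-alone, assuming plain GNU dd and placeholder file names:

  # Sketch only: plain dd stands in for spdk_dd --oflag=append.
  dump0=$(head -c 24 /dev/urandom | base64)
  dump1=$(head -c 24 /dev/urandom | base64)
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  # conv=notrunc keeps the existing contents; oflag=append opens with O_APPEND.
  dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
  [[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] && echo "append behaved as expected"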
00:09:13.460 [2024-11-04 17:10:14.040864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60380 ] 00:09:13.460 [2024-11-04 17:10:14.187317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.460 [2024-11-04 17:10:14.240435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.719 [2024-11-04 17:10:14.298950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.719  [2024-11-04T17:10:14.782Z] Copying: 32/32 [B] (average 31 kBps) 00:09:13.978 00:09:13.978 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 45iojfj46yxxz1y5dpnm5seqnqtl4bd7tc3b899dtsqu7wx8l8xsv7xl6u8hj2mx == \4\5\i\o\j\f\j\4\6\y\x\x\z\1\y\5\d\p\n\m\5\s\e\q\n\q\t\l\4\b\d\7\t\c\3\b\8\9\9\d\t\s\q\u\7\w\x\8\l\8\x\s\v\7\x\l\6\u\8\h\j\2\m\x ]] 00:09:13.978 00:09:13.978 real 0m0.562s 00:09:13.978 user 0m0.302s 00:09:13.978 sys 0m0.138s 00:09:13.978 ************************************ 00:09:13.978 END TEST dd_flag_append_forced_aio 00:09:13.978 ************************************ 00:09:13.978 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:13.979 ************************************ 00:09:13.979 START TEST dd_flag_directory_forced_aio 00:09:13.979 ************************************ 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.979 17:10:14 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:13.979 17:10:14 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:13.979 [2024-11-04 17:10:14.650071] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:13.979 [2024-11-04 17:10:14.650201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60407 ] 00:09:14.238 [2024-11-04 17:10:14.800501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.238 [2024-11-04 17:10:14.861956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.238 [2024-11-04 17:10:14.915637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.238 [2024-11-04 17:10:14.948582] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:14.238 [2024-11-04 17:10:14.948668] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:14.238 [2024-11-04 17:10:14.948700] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.497 [2024-11-04 17:10:15.066190] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:14.497 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:14.497 [2024-11-04 17:10:15.178476] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:14.497 [2024-11-04 17:10:15.178597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60416 ] 00:09:14.757 [2024-11-04 17:10:15.317804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.757 [2024-11-04 17:10:15.370587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.757 [2024-11-04 17:10:15.423251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.757 [2024-11-04 17:10:15.458537] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:14.757 [2024-11-04 17:10:15.458600] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:14.757 [2024-11-04 17:10:15.458634] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.016 [2024-11-04 17:10:15.583676] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:09:15.016 17:10:15 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:15.016 00:09:15.016 real 0m1.057s 00:09:15.016 user 0m0.562s 00:09:15.016 sys 0m0.285s 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.016 ************************************ 00:09:15.016 END TEST dd_flag_directory_forced_aio 00:09:15.016 ************************************ 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:15.016 ************************************ 00:09:15.016 START TEST dd_flag_nofollow_forced_aio 00:09:15.016 ************************************ 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:15.016 17:10:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:15.016 [2024-11-04 17:10:15.757319] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:15.016 [2024-11-04 17:10:15.757426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60445 ] 00:09:15.276 [2024-11-04 17:10:15.898347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.276 [2024-11-04 17:10:15.944363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.276 [2024-11-04 17:10:15.997256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.276 [2024-11-04 17:10:16.031111] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:15.276 [2024-11-04 17:10:16.031179] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:15.276 [2024-11-04 17:10:16.031214] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.578 [2024-11-04 17:10:16.152303] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:15.578 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:15.578 [2024-11-04 17:10:16.271172] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:15.578 [2024-11-04 17:10:16.271300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60454 ] 00:09:15.838 [2024-11-04 17:10:16.414108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.838 [2024-11-04 17:10:16.465245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.838 [2024-11-04 17:10:16.521036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.838 [2024-11-04 17:10:16.556454] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:15.838 [2024-11-04 17:10:16.556495] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:15.838 [2024-11-04 17:10:16.556531] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.097 [2024-11-04 17:10:16.675102] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:16.097 17:10:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:16.097 [2024-11-04 17:10:16.803731] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:16.097 [2024-11-04 17:10:16.803854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:09:16.355 [2024-11-04 17:10:16.949901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.355 [2024-11-04 17:10:16.994364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.355 [2024-11-04 17:10:17.046306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.355  [2024-11-04T17:10:17.418Z] Copying: 512/512 [B] (average 500 kBps) 00:09:16.614 00:09:16.614 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ln82o3cvr8vuf9tx6kqk5785i3bjvqhddnhpoij7gpsi47tyzourhs9iwmiqojfh5v7ybzpvp3rlw91h4pcevzrcpcbjqsv4x5yy13t31fxvu0pruou4qym4gvemixz0qed4qyq9c1547nntxtg1nc33jqex455wxxbwtxzidowt9744b7n2xd3kujs3gotr80t6ollrzzvb2ba9s4kk1ewgwidc196l3j8btan8olgidzhhdfszzxubf88iwpkn7v45rx1aowqcej1mxguoemrkwmnf0lhv5v761y4th1oidmnrz9ja95kulj9s8fl214kx5auue9wb18ul0hs3eyhbl48sbedin38kszcv1at9bq3lwlbocgckp8qq5zump25kjf9o8mdvfraxr8o2cvx4euuptwqribaiwtbqw4op7x60wslikvvrzdxtppjgeizghly01xh1193ce3462spxwqhuv8kvt0pw6nuez7l8b1l4v4ag5zfyanmb2sk0 == \l\n\8\2\o\3\c\v\r\8\v\u\f\9\t\x\6\k\q\k\5\7\8\5\i\3\b\j\v\q\h\d\d\n\h\p\o\i\j\7\g\p\s\i\4\7\t\y\z\o\u\r\h\s\9\i\w\m\i\q\o\j\f\h\5\v\7\y\b\z\p\v\p\3\r\l\w\9\1\h\4\p\c\e\v\z\r\c\p\c\b\j\q\s\v\4\x\5\y\y\1\3\t\3\1\f\x\v\u\0\p\r\u\o\u\4\q\y\m\4\g\v\e\m\i\x\z\0\q\e\d\4\q\y\q\9\c\1\5\4\7\n\n\t\x\t\g\1\n\c\3\3\j\q\e\x\4\5\5\w\x\x\b\w\t\x\z\i\d\o\w\t\9\7\4\4\b\7\n\2\x\d\3\k\u\j\s\3\g\o\t\r\8\0\t\6\o\l\l\r\z\z\v\b\2\b\a\9\s\4\k\k\1\e\w\g\w\i\d\c\1\9\6\l\3\j\8\b\t\a\n\8\o\l\g\i\d\z\h\h\d\f\s\z\z\x\u\b\f\8\8\i\w\p\k\n\7\v\4\5\r\x\1\a\o\w\q\c\e\j\1\m\x\g\u\o\e\m\r\k\w\m\n\f\0\l\h\v\5\v\7\6\1\y\4\t\h\1\o\i\d\m\n\r\z\9\j\a\9\5\k\u\l\j\9\s\8\f\l\2\1\4\k\x\5\a\u\u\e\9\w\b\1\8\u\l\0\h\s\3\e\y\h\b\l\4\8\s\b\e\d\i\n\3\8\k\s\z\c\v\1\a\t\9\b\q\3\l\w\l\b\o\c\g\c\k\p\8\q\q\5\z\u\m\p\2\5\k\j\f\9\o\8\m\d\v\f\r\a\x\r\8\o\2\c\v\x\4\e\u\u\p\t\w\q\r\i\b\a\i\w\t\b\q\w\4\o\p\7\x\6\0\w\s\l\i\k\v\v\r\z\d\x\t\p\p\j\g\e\i\z\g\h\l\y\0\1\x\h\1\1\9\3\c\e\3\4\6\2\s\p\x\w\q\h\u\v\8\k\v\t\0\p\w\6\n\u\e\z\7\l\8\b\1\l\4\v\4\a\g\5\z\f\y\a\n\m\b\2\s\k\0 ]] 00:09:16.614 00:09:16.614 real 0m1.599s 00:09:16.614 user 0m0.854s 00:09:16.614 sys 0m0.418s 00:09:16.614 ************************************ 00:09:16.614 END TEST dd_flag_nofollow_forced_aio 00:09:16.615 ************************************ 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:16.615 ************************************ 00:09:16.615 START TEST dd_flag_noatime_forced_aio 00:09:16.615 ************************************ 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1730740217 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1730740217 00:09:16.615 17:10:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:09:17.993 17:10:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:17.993 [2024-11-04 17:10:18.432655] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
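The noatime case starting here records the access time of dump0 with stat --printf=%X (seconds since the epoch), sleeps one second, and then reads the file with O_NOATIME; the follow-up assertions expect that atime to be unchanged after the noatime read and to move forward only after a later read without the flag. A minimal sketch of the same pattern, assuming GNU stat and dd on a filesystem that updates atime at all:

  # Sketch only: a read with O_NOATIME should leave the access time alone.
  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  dd if=dd.dump0 iflag=noatime of=/dev/null bs=512 status=none
  atime_after=$(stat --printf=%X dd.dump0)
  (( atime_after == atime_before )) && echo "atime untouched by the noatime read"
  # O_NOATIME requires owning the file (or CAP_FOWNER); relatime/noatime mount
  # options can also mask the comparison.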
00:09:17.993 [2024-11-04 17:10:18.432771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:09:17.993 [2024-11-04 17:10:18.581210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.993 [2024-11-04 17:10:18.641250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.993 [2024-11-04 17:10:18.700258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.993  [2024-11-04T17:10:19.056Z] Copying: 512/512 [B] (average 500 kBps) 00:09:18.252 00:09:18.252 17:10:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:18.252 17:10:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1730740217 )) 00:09:18.252 17:10:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:18.252 17:10:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1730740217 )) 00:09:18.252 17:10:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:18.252 [2024-11-04 17:10:19.008628] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:18.252 [2024-11-04 17:10:19.008758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60514 ] 00:09:18.512 [2024-11-04 17:10:19.147761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.512 [2024-11-04 17:10:19.210170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.512 [2024-11-04 17:10:19.264019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.512  [2024-11-04T17:10:19.574Z] Copying: 512/512 [B] (average 500 kBps) 00:09:18.770 00:09:18.770 17:10:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:18.770 17:10:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1730740219 )) 00:09:18.770 00:09:18.770 real 0m2.170s 00:09:18.770 user 0m0.605s 00:09:18.770 sys 0m0.323s 00:09:18.770 ************************************ 00:09:18.770 END TEST dd_flag_noatime_forced_aio 00:09:18.770 17:10:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.770 17:10:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:18.770 ************************************ 00:09:18.770 17:10:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:09:18.770 17:10:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:18.770 17:10:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.770 17:10:19 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.032 ************************************ 00:09:19.032 START TEST dd_flags_misc_forced_aio 00:09:19.032 ************************************ 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:19.032 17:10:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:19.032 [2024-11-04 17:10:19.644564] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:19.032 [2024-11-04 17:10:19.644661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60540 ] 00:09:19.032 [2024-11-04 17:10:19.791944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.292 [2024-11-04 17:10:19.851215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.292 [2024-11-04 17:10:19.913253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.292  [2024-11-04T17:10:20.355Z] Copying: 512/512 [B] (average 500 kBps) 00:09:19.551 00:09:19.551 17:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cxp0p0vkv4t4ceto1fokiz9w6srtw5pq9ypwxspzy1nsdvr5fz42yno41my5l6kdsae3d1c35zlfvr699mtqsoy42lpmrvp5xfhpfsb24iugfhqgcqzj6fr6memd5u4f95wf2d6tgawprmbdnzcd22stg49dndr8dnb84j3hlnq79715jem8hnualyz3sx0l0go7jak1wq3du6dkvbvk51fnl6abo63nb7og9zimmnse75cn2vapwxucsnhyylmegcb3lb3wrm74w1t1htooh9x43wl30i2kl8sarle9tj3oknnuargyxvrlgi4e2hmtbthf9wz88filetqh6dql277p644k5lpxolteq4cnqhvmz5cfwlz3t7j21lhyo61d2o4c7vegrf8pztech6abnwdcqnlh91rs5y92ger05s9k9354ihbsrivuydqrv7cs4xbs94i3g51h4o590hc1wir87k4vuwvf6zjvifrk49q3llel075fz9nl4s51xce1 == 
\c\x\p\0\p\0\v\k\v\4\t\4\c\e\t\o\1\f\o\k\i\z\9\w\6\s\r\t\w\5\p\q\9\y\p\w\x\s\p\z\y\1\n\s\d\v\r\5\f\z\4\2\y\n\o\4\1\m\y\5\l\6\k\d\s\a\e\3\d\1\c\3\5\z\l\f\v\r\6\9\9\m\t\q\s\o\y\4\2\l\p\m\r\v\p\5\x\f\h\p\f\s\b\2\4\i\u\g\f\h\q\g\c\q\z\j\6\f\r\6\m\e\m\d\5\u\4\f\9\5\w\f\2\d\6\t\g\a\w\p\r\m\b\d\n\z\c\d\2\2\s\t\g\4\9\d\n\d\r\8\d\n\b\8\4\j\3\h\l\n\q\7\9\7\1\5\j\e\m\8\h\n\u\a\l\y\z\3\s\x\0\l\0\g\o\7\j\a\k\1\w\q\3\d\u\6\d\k\v\b\v\k\5\1\f\n\l\6\a\b\o\6\3\n\b\7\o\g\9\z\i\m\m\n\s\e\7\5\c\n\2\v\a\p\w\x\u\c\s\n\h\y\y\l\m\e\g\c\b\3\l\b\3\w\r\m\7\4\w\1\t\1\h\t\o\o\h\9\x\4\3\w\l\3\0\i\2\k\l\8\s\a\r\l\e\9\t\j\3\o\k\n\n\u\a\r\g\y\x\v\r\l\g\i\4\e\2\h\m\t\b\t\h\f\9\w\z\8\8\f\i\l\e\t\q\h\6\d\q\l\2\7\7\p\6\4\4\k\5\l\p\x\o\l\t\e\q\4\c\n\q\h\v\m\z\5\c\f\w\l\z\3\t\7\j\2\1\l\h\y\o\6\1\d\2\o\4\c\7\v\e\g\r\f\8\p\z\t\e\c\h\6\a\b\n\w\d\c\q\n\l\h\9\1\r\s\5\y\9\2\g\e\r\0\5\s\9\k\9\3\5\4\i\h\b\s\r\i\v\u\y\d\q\r\v\7\c\s\4\x\b\s\9\4\i\3\g\5\1\h\4\o\5\9\0\h\c\1\w\i\r\8\7\k\4\v\u\w\v\f\6\z\j\v\i\f\r\k\4\9\q\3\l\l\e\l\0\7\5\f\z\9\n\l\4\s\5\1\x\c\e\1 ]] 00:09:19.551 17:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:19.551 17:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:19.551 [2024-11-04 17:10:20.223780] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:19.551 [2024-11-04 17:10:20.223923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60553 ] 00:09:19.811 [2024-11-04 17:10:20.367454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.811 [2024-11-04 17:10:20.417395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.811 [2024-11-04 17:10:20.473511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.811  [2024-11-04T17:10:20.875Z] Copying: 512/512 [B] (average 500 kBps) 00:09:20.071 00:09:20.071 17:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cxp0p0vkv4t4ceto1fokiz9w6srtw5pq9ypwxspzy1nsdvr5fz42yno41my5l6kdsae3d1c35zlfvr699mtqsoy42lpmrvp5xfhpfsb24iugfhqgcqzj6fr6memd5u4f95wf2d6tgawprmbdnzcd22stg49dndr8dnb84j3hlnq79715jem8hnualyz3sx0l0go7jak1wq3du6dkvbvk51fnl6abo63nb7og9zimmnse75cn2vapwxucsnhyylmegcb3lb3wrm74w1t1htooh9x43wl30i2kl8sarle9tj3oknnuargyxvrlgi4e2hmtbthf9wz88filetqh6dql277p644k5lpxolteq4cnqhvmz5cfwlz3t7j21lhyo61d2o4c7vegrf8pztech6abnwdcqnlh91rs5y92ger05s9k9354ihbsrivuydqrv7cs4xbs94i3g51h4o590hc1wir87k4vuwvf6zjvifrk49q3llel075fz9nl4s51xce1 == 
\c\x\p\0\p\0\v\k\v\4\t\4\c\e\t\o\1\f\o\k\i\z\9\w\6\s\r\t\w\5\p\q\9\y\p\w\x\s\p\z\y\1\n\s\d\v\r\5\f\z\4\2\y\n\o\4\1\m\y\5\l\6\k\d\s\a\e\3\d\1\c\3\5\z\l\f\v\r\6\9\9\m\t\q\s\o\y\4\2\l\p\m\r\v\p\5\x\f\h\p\f\s\b\2\4\i\u\g\f\h\q\g\c\q\z\j\6\f\r\6\m\e\m\d\5\u\4\f\9\5\w\f\2\d\6\t\g\a\w\p\r\m\b\d\n\z\c\d\2\2\s\t\g\4\9\d\n\d\r\8\d\n\b\8\4\j\3\h\l\n\q\7\9\7\1\5\j\e\m\8\h\n\u\a\l\y\z\3\s\x\0\l\0\g\o\7\j\a\k\1\w\q\3\d\u\6\d\k\v\b\v\k\5\1\f\n\l\6\a\b\o\6\3\n\b\7\o\g\9\z\i\m\m\n\s\e\7\5\c\n\2\v\a\p\w\x\u\c\s\n\h\y\y\l\m\e\g\c\b\3\l\b\3\w\r\m\7\4\w\1\t\1\h\t\o\o\h\9\x\4\3\w\l\3\0\i\2\k\l\8\s\a\r\l\e\9\t\j\3\o\k\n\n\u\a\r\g\y\x\v\r\l\g\i\4\e\2\h\m\t\b\t\h\f\9\w\z\8\8\f\i\l\e\t\q\h\6\d\q\l\2\7\7\p\6\4\4\k\5\l\p\x\o\l\t\e\q\4\c\n\q\h\v\m\z\5\c\f\w\l\z\3\t\7\j\2\1\l\h\y\o\6\1\d\2\o\4\c\7\v\e\g\r\f\8\p\z\t\e\c\h\6\a\b\n\w\d\c\q\n\l\h\9\1\r\s\5\y\9\2\g\e\r\0\5\s\9\k\9\3\5\4\i\h\b\s\r\i\v\u\y\d\q\r\v\7\c\s\4\x\b\s\9\4\i\3\g\5\1\h\4\o\5\9\0\h\c\1\w\i\r\8\7\k\4\v\u\w\v\f\6\z\j\v\i\f\r\k\4\9\q\3\l\l\e\l\0\7\5\f\z\9\n\l\4\s\5\1\x\c\e\1 ]] 00:09:20.071 17:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:20.071 17:10:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:20.071 [2024-11-04 17:10:20.786030] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:20.071 [2024-11-04 17:10:20.786123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60561 ] 00:09:20.335 [2024-11-04 17:10:20.928755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.335 [2024-11-04 17:10:20.977424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.335 [2024-11-04 17:10:21.033372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.335  [2024-11-04T17:10:21.399Z] Copying: 512/512 [B] (average 125 kBps) 00:09:20.595 00:09:20.595 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cxp0p0vkv4t4ceto1fokiz9w6srtw5pq9ypwxspzy1nsdvr5fz42yno41my5l6kdsae3d1c35zlfvr699mtqsoy42lpmrvp5xfhpfsb24iugfhqgcqzj6fr6memd5u4f95wf2d6tgawprmbdnzcd22stg49dndr8dnb84j3hlnq79715jem8hnualyz3sx0l0go7jak1wq3du6dkvbvk51fnl6abo63nb7og9zimmnse75cn2vapwxucsnhyylmegcb3lb3wrm74w1t1htooh9x43wl30i2kl8sarle9tj3oknnuargyxvrlgi4e2hmtbthf9wz88filetqh6dql277p644k5lpxolteq4cnqhvmz5cfwlz3t7j21lhyo61d2o4c7vegrf8pztech6abnwdcqnlh91rs5y92ger05s9k9354ihbsrivuydqrv7cs4xbs94i3g51h4o590hc1wir87k4vuwvf6zjvifrk49q3llel075fz9nl4s51xce1 == 
\c\x\p\0\p\0\v\k\v\4\t\4\c\e\t\o\1\f\o\k\i\z\9\w\6\s\r\t\w\5\p\q\9\y\p\w\x\s\p\z\y\1\n\s\d\v\r\5\f\z\4\2\y\n\o\4\1\m\y\5\l\6\k\d\s\a\e\3\d\1\c\3\5\z\l\f\v\r\6\9\9\m\t\q\s\o\y\4\2\l\p\m\r\v\p\5\x\f\h\p\f\s\b\2\4\i\u\g\f\h\q\g\c\q\z\j\6\f\r\6\m\e\m\d\5\u\4\f\9\5\w\f\2\d\6\t\g\a\w\p\r\m\b\d\n\z\c\d\2\2\s\t\g\4\9\d\n\d\r\8\d\n\b\8\4\j\3\h\l\n\q\7\9\7\1\5\j\e\m\8\h\n\u\a\l\y\z\3\s\x\0\l\0\g\o\7\j\a\k\1\w\q\3\d\u\6\d\k\v\b\v\k\5\1\f\n\l\6\a\b\o\6\3\n\b\7\o\g\9\z\i\m\m\n\s\e\7\5\c\n\2\v\a\p\w\x\u\c\s\n\h\y\y\l\m\e\g\c\b\3\l\b\3\w\r\m\7\4\w\1\t\1\h\t\o\o\h\9\x\4\3\w\l\3\0\i\2\k\l\8\s\a\r\l\e\9\t\j\3\o\k\n\n\u\a\r\g\y\x\v\r\l\g\i\4\e\2\h\m\t\b\t\h\f\9\w\z\8\8\f\i\l\e\t\q\h\6\d\q\l\2\7\7\p\6\4\4\k\5\l\p\x\o\l\t\e\q\4\c\n\q\h\v\m\z\5\c\f\w\l\z\3\t\7\j\2\1\l\h\y\o\6\1\d\2\o\4\c\7\v\e\g\r\f\8\p\z\t\e\c\h\6\a\b\n\w\d\c\q\n\l\h\9\1\r\s\5\y\9\2\g\e\r\0\5\s\9\k\9\3\5\4\i\h\b\s\r\i\v\u\y\d\q\r\v\7\c\s\4\x\b\s\9\4\i\3\g\5\1\h\4\o\5\9\0\h\c\1\w\i\r\8\7\k\4\v\u\w\v\f\6\z\j\v\i\f\r\k\4\9\q\3\l\l\e\l\0\7\5\f\z\9\n\l\4\s\5\1\x\c\e\1 ]] 00:09:20.595 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:20.595 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:20.595 [2024-11-04 17:10:21.318673] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:20.595 [2024-11-04 17:10:21.318776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60568 ] 00:09:20.854 [2024-11-04 17:10:21.460942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.854 [2024-11-04 17:10:21.520627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.854 [2024-11-04 17:10:21.574543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.854  [2024-11-04T17:10:21.917Z] Copying: 512/512 [B] (average 250 kBps) 00:09:21.113 00:09:21.113 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cxp0p0vkv4t4ceto1fokiz9w6srtw5pq9ypwxspzy1nsdvr5fz42yno41my5l6kdsae3d1c35zlfvr699mtqsoy42lpmrvp5xfhpfsb24iugfhqgcqzj6fr6memd5u4f95wf2d6tgawprmbdnzcd22stg49dndr8dnb84j3hlnq79715jem8hnualyz3sx0l0go7jak1wq3du6dkvbvk51fnl6abo63nb7og9zimmnse75cn2vapwxucsnhyylmegcb3lb3wrm74w1t1htooh9x43wl30i2kl8sarle9tj3oknnuargyxvrlgi4e2hmtbthf9wz88filetqh6dql277p644k5lpxolteq4cnqhvmz5cfwlz3t7j21lhyo61d2o4c7vegrf8pztech6abnwdcqnlh91rs5y92ger05s9k9354ihbsrivuydqrv7cs4xbs94i3g51h4o590hc1wir87k4vuwvf6zjvifrk49q3llel075fz9nl4s51xce1 == 
\c\x\p\0\p\0\v\k\v\4\t\4\c\e\t\o\1\f\o\k\i\z\9\w\6\s\r\t\w\5\p\q\9\y\p\w\x\s\p\z\y\1\n\s\d\v\r\5\f\z\4\2\y\n\o\4\1\m\y\5\l\6\k\d\s\a\e\3\d\1\c\3\5\z\l\f\v\r\6\9\9\m\t\q\s\o\y\4\2\l\p\m\r\v\p\5\x\f\h\p\f\s\b\2\4\i\u\g\f\h\q\g\c\q\z\j\6\f\r\6\m\e\m\d\5\u\4\f\9\5\w\f\2\d\6\t\g\a\w\p\r\m\b\d\n\z\c\d\2\2\s\t\g\4\9\d\n\d\r\8\d\n\b\8\4\j\3\h\l\n\q\7\9\7\1\5\j\e\m\8\h\n\u\a\l\y\z\3\s\x\0\l\0\g\o\7\j\a\k\1\w\q\3\d\u\6\d\k\v\b\v\k\5\1\f\n\l\6\a\b\o\6\3\n\b\7\o\g\9\z\i\m\m\n\s\e\7\5\c\n\2\v\a\p\w\x\u\c\s\n\h\y\y\l\m\e\g\c\b\3\l\b\3\w\r\m\7\4\w\1\t\1\h\t\o\o\h\9\x\4\3\w\l\3\0\i\2\k\l\8\s\a\r\l\e\9\t\j\3\o\k\n\n\u\a\r\g\y\x\v\r\l\g\i\4\e\2\h\m\t\b\t\h\f\9\w\z\8\8\f\i\l\e\t\q\h\6\d\q\l\2\7\7\p\6\4\4\k\5\l\p\x\o\l\t\e\q\4\c\n\q\h\v\m\z\5\c\f\w\l\z\3\t\7\j\2\1\l\h\y\o\6\1\d\2\o\4\c\7\v\e\g\r\f\8\p\z\t\e\c\h\6\a\b\n\w\d\c\q\n\l\h\9\1\r\s\5\y\9\2\g\e\r\0\5\s\9\k\9\3\5\4\i\h\b\s\r\i\v\u\y\d\q\r\v\7\c\s\4\x\b\s\9\4\i\3\g\5\1\h\4\o\5\9\0\h\c\1\w\i\r\8\7\k\4\v\u\w\v\f\6\z\j\v\i\f\r\k\4\9\q\3\l\l\e\l\0\7\5\f\z\9\n\l\4\s\5\1\x\c\e\1 ]] 00:09:21.113 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:21.113 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:21.113 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:21.113 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:21.113 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:21.113 17:10:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:21.113 [2024-11-04 17:10:21.894802] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:21.113 [2024-11-04 17:10:21.894911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60576 ] 00:09:21.372 [2024-11-04 17:10:22.043022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.372 [2024-11-04 17:10:22.088480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.372 [2024-11-04 17:10:22.144608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:21.631  [2024-11-04T17:10:22.435Z] Copying: 512/512 [B] (average 500 kBps) 00:09:21.631 00:09:21.631 17:10:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s0gpopo0nikypcdpz09v8xlkm2jq505w636sn3fzxmdaklwfnoodpofdh7fl01nn9572a4qofrx3cvmqlekbgbrl9mzijhz8342sv8hqozim5rn46kxxkujk8qbjvzqe1d0z6xiuggn2u8tl61mzr58sr4by2ikk4mlnaf2e1f6inanz280bn5twjvp5nj5d4xgpda8alsjknb08gid47d1akdal2j55rixbn2ys9vh4w2nmcbgvekm5zwhfzbs95znfcqv60ubnlwym2o4707u83zseow07bymaqtqzsly4sv9zmr1l4q7ejr9a22udvilfw5efcyy9b2b21gi32mr36aud0dilzvcxl6u1c0wfmfw0a0g93y88xo90js8hjv7i49cn2nmiupewj27w42myzgytx57mlujulzlufmiuhypcuja1mw8ijpbha1tqls3jrazrsloi49o4ivkuibme7b6mj4hb7prdep8kllhdv978uel89ijd5k1yzt9t == \s\0\g\p\o\p\o\0\n\i\k\y\p\c\d\p\z\0\9\v\8\x\l\k\m\2\j\q\5\0\5\w\6\3\6\s\n\3\f\z\x\m\d\a\k\l\w\f\n\o\o\d\p\o\f\d\h\7\f\l\0\1\n\n\9\5\7\2\a\4\q\o\f\r\x\3\c\v\m\q\l\e\k\b\g\b\r\l\9\m\z\i\j\h\z\8\3\4\2\s\v\8\h\q\o\z\i\m\5\r\n\4\6\k\x\x\k\u\j\k\8\q\b\j\v\z\q\e\1\d\0\z\6\x\i\u\g\g\n\2\u\8\t\l\6\1\m\z\r\5\8\s\r\4\b\y\2\i\k\k\4\m\l\n\a\f\2\e\1\f\6\i\n\a\n\z\2\8\0\b\n\5\t\w\j\v\p\5\n\j\5\d\4\x\g\p\d\a\8\a\l\s\j\k\n\b\0\8\g\i\d\4\7\d\1\a\k\d\a\l\2\j\5\5\r\i\x\b\n\2\y\s\9\v\h\4\w\2\n\m\c\b\g\v\e\k\m\5\z\w\h\f\z\b\s\9\5\z\n\f\c\q\v\6\0\u\b\n\l\w\y\m\2\o\4\7\0\7\u\8\3\z\s\e\o\w\0\7\b\y\m\a\q\t\q\z\s\l\y\4\s\v\9\z\m\r\1\l\4\q\7\e\j\r\9\a\2\2\u\d\v\i\l\f\w\5\e\f\c\y\y\9\b\2\b\2\1\g\i\3\2\m\r\3\6\a\u\d\0\d\i\l\z\v\c\x\l\6\u\1\c\0\w\f\m\f\w\0\a\0\g\9\3\y\8\8\x\o\9\0\j\s\8\h\j\v\7\i\4\9\c\n\2\n\m\i\u\p\e\w\j\2\7\w\4\2\m\y\z\g\y\t\x\5\7\m\l\u\j\u\l\z\l\u\f\m\i\u\h\y\p\c\u\j\a\1\m\w\8\i\j\p\b\h\a\1\t\q\l\s\3\j\r\a\z\r\s\l\o\i\4\9\o\4\i\v\k\u\i\b\m\e\7\b\6\m\j\4\h\b\7\p\r\d\e\p\8\k\l\l\h\d\v\9\7\8\u\e\l\8\9\i\j\d\5\k\1\y\z\t\9\t ]] 00:09:21.631 17:10:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:21.632 17:10:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:21.891 [2024-11-04 17:10:22.454913] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:21.891 [2024-11-04 17:10:22.455018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60583 ] 00:09:21.891 [2024-11-04 17:10:22.603124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.891 [2024-11-04 17:10:22.663495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.149 [2024-11-04 17:10:22.722017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.149  [2024-11-04T17:10:23.212Z] Copying: 512/512 [B] (average 500 kBps) 00:09:22.408 00:09:22.408 17:10:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s0gpopo0nikypcdpz09v8xlkm2jq505w636sn3fzxmdaklwfnoodpofdh7fl01nn9572a4qofrx3cvmqlekbgbrl9mzijhz8342sv8hqozim5rn46kxxkujk8qbjvzqe1d0z6xiuggn2u8tl61mzr58sr4by2ikk4mlnaf2e1f6inanz280bn5twjvp5nj5d4xgpda8alsjknb08gid47d1akdal2j55rixbn2ys9vh4w2nmcbgvekm5zwhfzbs95znfcqv60ubnlwym2o4707u83zseow07bymaqtqzsly4sv9zmr1l4q7ejr9a22udvilfw5efcyy9b2b21gi32mr36aud0dilzvcxl6u1c0wfmfw0a0g93y88xo90js8hjv7i49cn2nmiupewj27w42myzgytx57mlujulzlufmiuhypcuja1mw8ijpbha1tqls3jrazrsloi49o4ivkuibme7b6mj4hb7prdep8kllhdv978uel89ijd5k1yzt9t == \s\0\g\p\o\p\o\0\n\i\k\y\p\c\d\p\z\0\9\v\8\x\l\k\m\2\j\q\5\0\5\w\6\3\6\s\n\3\f\z\x\m\d\a\k\l\w\f\n\o\o\d\p\o\f\d\h\7\f\l\0\1\n\n\9\5\7\2\a\4\q\o\f\r\x\3\c\v\m\q\l\e\k\b\g\b\r\l\9\m\z\i\j\h\z\8\3\4\2\s\v\8\h\q\o\z\i\m\5\r\n\4\6\k\x\x\k\u\j\k\8\q\b\j\v\z\q\e\1\d\0\z\6\x\i\u\g\g\n\2\u\8\t\l\6\1\m\z\r\5\8\s\r\4\b\y\2\i\k\k\4\m\l\n\a\f\2\e\1\f\6\i\n\a\n\z\2\8\0\b\n\5\t\w\j\v\p\5\n\j\5\d\4\x\g\p\d\a\8\a\l\s\j\k\n\b\0\8\g\i\d\4\7\d\1\a\k\d\a\l\2\j\5\5\r\i\x\b\n\2\y\s\9\v\h\4\w\2\n\m\c\b\g\v\e\k\m\5\z\w\h\f\z\b\s\9\5\z\n\f\c\q\v\6\0\u\b\n\l\w\y\m\2\o\4\7\0\7\u\8\3\z\s\e\o\w\0\7\b\y\m\a\q\t\q\z\s\l\y\4\s\v\9\z\m\r\1\l\4\q\7\e\j\r\9\a\2\2\u\d\v\i\l\f\w\5\e\f\c\y\y\9\b\2\b\2\1\g\i\3\2\m\r\3\6\a\u\d\0\d\i\l\z\v\c\x\l\6\u\1\c\0\w\f\m\f\w\0\a\0\g\9\3\y\8\8\x\o\9\0\j\s\8\h\j\v\7\i\4\9\c\n\2\n\m\i\u\p\e\w\j\2\7\w\4\2\m\y\z\g\y\t\x\5\7\m\l\u\j\u\l\z\l\u\f\m\i\u\h\y\p\c\u\j\a\1\m\w\8\i\j\p\b\h\a\1\t\q\l\s\3\j\r\a\z\r\s\l\o\i\4\9\o\4\i\v\k\u\i\b\m\e\7\b\6\m\j\4\h\b\7\p\r\d\e\p\8\k\l\l\h\d\v\9\7\8\u\e\l\8\9\i\j\d\5\k\1\y\z\t\9\t ]] 00:09:22.408 17:10:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:22.408 17:10:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:22.408 [2024-11-04 17:10:23.018572] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:22.409 [2024-11-04 17:10:23.018665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60591 ] 00:09:22.409 [2024-11-04 17:10:23.167878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.667 [2024-11-04 17:10:23.233915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.667 [2024-11-04 17:10:23.291524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.667  [2024-11-04T17:10:23.731Z] Copying: 512/512 [B] (average 500 kBps) 00:09:22.927 00:09:22.928 17:10:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s0gpopo0nikypcdpz09v8xlkm2jq505w636sn3fzxmdaklwfnoodpofdh7fl01nn9572a4qofrx3cvmqlekbgbrl9mzijhz8342sv8hqozim5rn46kxxkujk8qbjvzqe1d0z6xiuggn2u8tl61mzr58sr4by2ikk4mlnaf2e1f6inanz280bn5twjvp5nj5d4xgpda8alsjknb08gid47d1akdal2j55rixbn2ys9vh4w2nmcbgvekm5zwhfzbs95znfcqv60ubnlwym2o4707u83zseow07bymaqtqzsly4sv9zmr1l4q7ejr9a22udvilfw5efcyy9b2b21gi32mr36aud0dilzvcxl6u1c0wfmfw0a0g93y88xo90js8hjv7i49cn2nmiupewj27w42myzgytx57mlujulzlufmiuhypcuja1mw8ijpbha1tqls3jrazrsloi49o4ivkuibme7b6mj4hb7prdep8kllhdv978uel89ijd5k1yzt9t == \s\0\g\p\o\p\o\0\n\i\k\y\p\c\d\p\z\0\9\v\8\x\l\k\m\2\j\q\5\0\5\w\6\3\6\s\n\3\f\z\x\m\d\a\k\l\w\f\n\o\o\d\p\o\f\d\h\7\f\l\0\1\n\n\9\5\7\2\a\4\q\o\f\r\x\3\c\v\m\q\l\e\k\b\g\b\r\l\9\m\z\i\j\h\z\8\3\4\2\s\v\8\h\q\o\z\i\m\5\r\n\4\6\k\x\x\k\u\j\k\8\q\b\j\v\z\q\e\1\d\0\z\6\x\i\u\g\g\n\2\u\8\t\l\6\1\m\z\r\5\8\s\r\4\b\y\2\i\k\k\4\m\l\n\a\f\2\e\1\f\6\i\n\a\n\z\2\8\0\b\n\5\t\w\j\v\p\5\n\j\5\d\4\x\g\p\d\a\8\a\l\s\j\k\n\b\0\8\g\i\d\4\7\d\1\a\k\d\a\l\2\j\5\5\r\i\x\b\n\2\y\s\9\v\h\4\w\2\n\m\c\b\g\v\e\k\m\5\z\w\h\f\z\b\s\9\5\z\n\f\c\q\v\6\0\u\b\n\l\w\y\m\2\o\4\7\0\7\u\8\3\z\s\e\o\w\0\7\b\y\m\a\q\t\q\z\s\l\y\4\s\v\9\z\m\r\1\l\4\q\7\e\j\r\9\a\2\2\u\d\v\i\l\f\w\5\e\f\c\y\y\9\b\2\b\2\1\g\i\3\2\m\r\3\6\a\u\d\0\d\i\l\z\v\c\x\l\6\u\1\c\0\w\f\m\f\w\0\a\0\g\9\3\y\8\8\x\o\9\0\j\s\8\h\j\v\7\i\4\9\c\n\2\n\m\i\u\p\e\w\j\2\7\w\4\2\m\y\z\g\y\t\x\5\7\m\l\u\j\u\l\z\l\u\f\m\i\u\h\y\p\c\u\j\a\1\m\w\8\i\j\p\b\h\a\1\t\q\l\s\3\j\r\a\z\r\s\l\o\i\4\9\o\4\i\v\k\u\i\b\m\e\7\b\6\m\j\4\h\b\7\p\r\d\e\p\8\k\l\l\h\d\v\9\7\8\u\e\l\8\9\i\j\d\5\k\1\y\z\t\9\t ]] 00:09:22.928 17:10:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:22.928 17:10:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:22.928 [2024-11-04 17:10:23.595533] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:22.928 [2024-11-04 17:10:23.595658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60598 ] 00:09:23.187 [2024-11-04 17:10:23.744502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.187 [2024-11-04 17:10:23.801984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.187 [2024-11-04 17:10:23.854873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.187  [2024-11-04T17:10:24.250Z] Copying: 512/512 [B] (average 500 kBps) 00:09:23.446 00:09:23.447 17:10:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s0gpopo0nikypcdpz09v8xlkm2jq505w636sn3fzxmdaklwfnoodpofdh7fl01nn9572a4qofrx3cvmqlekbgbrl9mzijhz8342sv8hqozim5rn46kxxkujk8qbjvzqe1d0z6xiuggn2u8tl61mzr58sr4by2ikk4mlnaf2e1f6inanz280bn5twjvp5nj5d4xgpda8alsjknb08gid47d1akdal2j55rixbn2ys9vh4w2nmcbgvekm5zwhfzbs95znfcqv60ubnlwym2o4707u83zseow07bymaqtqzsly4sv9zmr1l4q7ejr9a22udvilfw5efcyy9b2b21gi32mr36aud0dilzvcxl6u1c0wfmfw0a0g93y88xo90js8hjv7i49cn2nmiupewj27w42myzgytx57mlujulzlufmiuhypcuja1mw8ijpbha1tqls3jrazrsloi49o4ivkuibme7b6mj4hb7prdep8kllhdv978uel89ijd5k1yzt9t == \s\0\g\p\o\p\o\0\n\i\k\y\p\c\d\p\z\0\9\v\8\x\l\k\m\2\j\q\5\0\5\w\6\3\6\s\n\3\f\z\x\m\d\a\k\l\w\f\n\o\o\d\p\o\f\d\h\7\f\l\0\1\n\n\9\5\7\2\a\4\q\o\f\r\x\3\c\v\m\q\l\e\k\b\g\b\r\l\9\m\z\i\j\h\z\8\3\4\2\s\v\8\h\q\o\z\i\m\5\r\n\4\6\k\x\x\k\u\j\k\8\q\b\j\v\z\q\e\1\d\0\z\6\x\i\u\g\g\n\2\u\8\t\l\6\1\m\z\r\5\8\s\r\4\b\y\2\i\k\k\4\m\l\n\a\f\2\e\1\f\6\i\n\a\n\z\2\8\0\b\n\5\t\w\j\v\p\5\n\j\5\d\4\x\g\p\d\a\8\a\l\s\j\k\n\b\0\8\g\i\d\4\7\d\1\a\k\d\a\l\2\j\5\5\r\i\x\b\n\2\y\s\9\v\h\4\w\2\n\m\c\b\g\v\e\k\m\5\z\w\h\f\z\b\s\9\5\z\n\f\c\q\v\6\0\u\b\n\l\w\y\m\2\o\4\7\0\7\u\8\3\z\s\e\o\w\0\7\b\y\m\a\q\t\q\z\s\l\y\4\s\v\9\z\m\r\1\l\4\q\7\e\j\r\9\a\2\2\u\d\v\i\l\f\w\5\e\f\c\y\y\9\b\2\b\2\1\g\i\3\2\m\r\3\6\a\u\d\0\d\i\l\z\v\c\x\l\6\u\1\c\0\w\f\m\f\w\0\a\0\g\9\3\y\8\8\x\o\9\0\j\s\8\h\j\v\7\i\4\9\c\n\2\n\m\i\u\p\e\w\j\2\7\w\4\2\m\y\z\g\y\t\x\5\7\m\l\u\j\u\l\z\l\u\f\m\i\u\h\y\p\c\u\j\a\1\m\w\8\i\j\p\b\h\a\1\t\q\l\s\3\j\r\a\z\r\s\l\o\i\4\9\o\4\i\v\k\u\i\b\m\e\7\b\6\m\j\4\h\b\7\p\r\d\e\p\8\k\l\l\h\d\v\9\7\8\u\e\l\8\9\i\j\d\5\k\1\y\z\t\9\t ]] 00:09:23.447 00:09:23.447 real 0m4.518s 00:09:23.447 user 0m2.385s 00:09:23.447 sys 0m1.147s 00:09:23.447 ************************************ 00:09:23.447 END TEST dd_flags_misc_forced_aio 00:09:23.447 ************************************ 00:09:23.447 17:10:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:23.447 17:10:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:23.447 17:10:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:23.447 17:10:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:23.447 17:10:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:23.447 00:09:23.447 real 0m20.433s 00:09:23.447 user 0m9.747s 00:09:23.447 sys 0m6.667s 00:09:23.447 17:10:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:23.447 ************************************ 00:09:23.447 END TEST spdk_dd_posix 
00:09:23.447 ************************************ 00:09:23.447 17:10:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:23.447 17:10:24 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:23.447 17:10:24 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:23.447 17:10:24 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:23.447 17:10:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:23.447 ************************************ 00:09:23.447 START TEST spdk_dd_malloc 00:09:23.447 ************************************ 00:09:23.447 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:23.707 * Looking for test storage... 00:09:23.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.707 --rc genhtml_branch_coverage=1 00:09:23.707 --rc genhtml_function_coverage=1 00:09:23.707 --rc genhtml_legend=1 00:09:23.707 --rc geninfo_all_blocks=1 00:09:23.707 --rc geninfo_unexecuted_blocks=1 00:09:23.707 00:09:23.707 ' 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.707 --rc genhtml_branch_coverage=1 00:09:23.707 --rc genhtml_function_coverage=1 00:09:23.707 --rc genhtml_legend=1 00:09:23.707 --rc geninfo_all_blocks=1 00:09:23.707 --rc geninfo_unexecuted_blocks=1 00:09:23.707 00:09:23.707 ' 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.707 --rc genhtml_branch_coverage=1 00:09:23.707 --rc genhtml_function_coverage=1 00:09:23.707 --rc genhtml_legend=1 00:09:23.707 --rc geninfo_all_blocks=1 00:09:23.707 --rc geninfo_unexecuted_blocks=1 00:09:23.707 00:09:23.707 ' 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:23.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.707 --rc genhtml_branch_coverage=1 00:09:23.707 --rc genhtml_function_coverage=1 00:09:23.707 --rc genhtml_legend=1 00:09:23.707 --rc geninfo_all_blocks=1 00:09:23.707 --rc geninfo_unexecuted_blocks=1 00:09:23.707 00:09:23.707 ' 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.707 17:10:24 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.707 17:10:24 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:23.708 ************************************ 00:09:23.708 START TEST dd_malloc_copy 00:09:23.708 ************************************ 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:23.708 17:10:24 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:23.708 [2024-11-04 17:10:24.443535] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:23.708 [2024-11-04 17:10:24.443629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60680 ] 00:09:23.708 { 00:09:23.708 "subsystems": [ 00:09:23.708 { 00:09:23.708 "subsystem": "bdev", 00:09:23.708 "config": [ 00:09:23.708 { 00:09:23.708 "params": { 00:09:23.708 "block_size": 512, 00:09:23.708 "num_blocks": 1048576, 00:09:23.708 "name": "malloc0" 00:09:23.708 }, 00:09:23.708 "method": "bdev_malloc_create" 00:09:23.708 }, 00:09:23.708 { 00:09:23.708 "params": { 00:09:23.708 "block_size": 512, 00:09:23.708 "num_blocks": 1048576, 00:09:23.708 "name": "malloc1" 00:09:23.708 }, 00:09:23.708 "method": "bdev_malloc_create" 00:09:23.708 }, 00:09:23.708 { 00:09:23.708 "method": "bdev_wait_for_examine" 00:09:23.708 } 00:09:23.708 ] 00:09:23.708 } 00:09:23.708 ] 00:09:23.708 } 00:09:23.967 [2024-11-04 17:10:24.589713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.967 [2024-11-04 17:10:24.647952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.967 [2024-11-04 17:10:24.702079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.374  [2024-11-04T17:10:27.115Z] Copying: 216/512 [MB] (216 MBps) [2024-11-04T17:10:27.683Z] Copying: 432/512 [MB] (215 MBps) [2024-11-04T17:10:28.251Z] Copying: 512/512 [MB] (average 216 MBps) 00:09:27.447 00:09:27.447 17:10:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:27.447 17:10:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:27.447 17:10:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:27.447 17:10:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:27.447 [2024-11-04 17:10:28.046331] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:27.447 [2024-11-04 17:10:28.046450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60728 ] 00:09:27.447 { 00:09:27.447 "subsystems": [ 00:09:27.447 { 00:09:27.447 "subsystem": "bdev", 00:09:27.447 "config": [ 00:09:27.447 { 00:09:27.447 "params": { 00:09:27.447 "block_size": 512, 00:09:27.447 "num_blocks": 1048576, 00:09:27.447 "name": "malloc0" 00:09:27.447 }, 00:09:27.447 "method": "bdev_malloc_create" 00:09:27.447 }, 00:09:27.447 { 00:09:27.447 "params": { 00:09:27.447 "block_size": 512, 00:09:27.447 "num_blocks": 1048576, 00:09:27.447 "name": "malloc1" 00:09:27.447 }, 00:09:27.447 "method": "bdev_malloc_create" 00:09:27.447 }, 00:09:27.447 { 00:09:27.447 "method": "bdev_wait_for_examine" 00:09:27.447 } 00:09:27.447 ] 00:09:27.447 } 00:09:27.447 ] 00:09:27.447 } 00:09:27.447 [2024-11-04 17:10:28.194064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.447 [2024-11-04 17:10:28.249424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.711 [2024-11-04 17:10:28.304662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.089  [2024-11-04T17:10:30.831Z] Copying: 221/512 [MB] (221 MBps) [2024-11-04T17:10:31.090Z] Copying: 452/512 [MB] (231 MBps) [2024-11-04T17:10:31.659Z] Copying: 512/512 [MB] (average 226 MBps) 00:09:30.855 00:09:30.855 00:09:30.855 real 0m7.117s 00:09:30.855 user 0m6.130s 00:09:30.855 sys 0m0.847s 00:09:30.855 17:10:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.855 ************************************ 00:09:30.855 END TEST dd_malloc_copy 00:09:30.855 ************************************ 00:09:30.855 17:10:31 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:30.855 00:09:30.855 real 0m7.339s 00:09:30.855 user 0m6.246s 00:09:30.855 sys 0m0.959s 00:09:30.855 17:10:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.855 17:10:31 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:30.855 ************************************ 00:09:30.855 END TEST spdk_dd_malloc 00:09:30.855 ************************************ 00:09:30.855 17:10:31 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:30.855 17:10:31 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:30.855 17:10:31 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.855 17:10:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:30.855 ************************************ 00:09:30.855 START TEST spdk_dd_bdev_to_bdev 00:09:30.855 ************************************ 00:09:30.855 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:31.114 * Looking for test storage... 
00:09:31.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.114 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:31.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.115 --rc genhtml_branch_coverage=1 00:09:31.115 --rc genhtml_function_coverage=1 00:09:31.115 --rc genhtml_legend=1 00:09:31.115 --rc geninfo_all_blocks=1 00:09:31.115 --rc geninfo_unexecuted_blocks=1 00:09:31.115 00:09:31.115 ' 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:31.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.115 --rc genhtml_branch_coverage=1 00:09:31.115 --rc genhtml_function_coverage=1 00:09:31.115 --rc genhtml_legend=1 00:09:31.115 --rc geninfo_all_blocks=1 00:09:31.115 --rc geninfo_unexecuted_blocks=1 00:09:31.115 00:09:31.115 ' 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:31.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.115 --rc genhtml_branch_coverage=1 00:09:31.115 --rc genhtml_function_coverage=1 00:09:31.115 --rc genhtml_legend=1 00:09:31.115 --rc geninfo_all_blocks=1 00:09:31.115 --rc geninfo_unexecuted_blocks=1 00:09:31.115 00:09:31.115 ' 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:31.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.115 --rc genhtml_branch_coverage=1 00:09:31.115 --rc genhtml_function_coverage=1 00:09:31.115 --rc genhtml_legend=1 00:09:31.115 --rc geninfo_all_blocks=1 00:09:31.115 --rc geninfo_unexecuted_blocks=1 00:09:31.115 00:09:31.115 ' 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.115 17:10:31 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:31.115 ************************************ 00:09:31.115 START TEST dd_inflate_file 00:09:31.115 ************************************ 00:09:31.115 17:10:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:31.115 [2024-11-04 17:10:31.844231] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:31.115 [2024-11-04 17:10:31.844931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60840 ] 00:09:31.374 [2024-11-04 17:10:31.991401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.374 [2024-11-04 17:10:32.035855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.374 [2024-11-04 17:10:32.088683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.374  [2024-11-04T17:10:32.436Z] Copying: 64/64 [MB] (average 1488 MBps) 00:09:31.632 00:09:31.632 00:09:31.632 real 0m0.566s 00:09:31.632 user 0m0.323s 00:09:31.632 sys 0m0.305s 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:31.632 ************************************ 00:09:31.632 END TEST dd_inflate_file 00:09:31.632 ************************************ 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:31.632 ************************************ 00:09:31.632 START TEST dd_copy_to_out_bdev 00:09:31.632 ************************************ 00:09:31.632 17:10:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:31.891 { 00:09:31.891 "subsystems": [ 00:09:31.891 { 00:09:31.891 "subsystem": "bdev", 00:09:31.891 "config": [ 00:09:31.891 { 00:09:31.891 "params": { 00:09:31.891 "trtype": "pcie", 00:09:31.891 "traddr": "0000:00:10.0", 00:09:31.891 "name": "Nvme0" 00:09:31.891 }, 00:09:31.891 "method": "bdev_nvme_attach_controller" 00:09:31.891 }, 00:09:31.891 { 00:09:31.891 "params": { 00:09:31.891 "trtype": "pcie", 00:09:31.891 "traddr": "0000:00:11.0", 00:09:31.891 "name": "Nvme1" 00:09:31.891 }, 00:09:31.891 "method": "bdev_nvme_attach_controller" 00:09:31.891 }, 00:09:31.891 { 00:09:31.891 "method": "bdev_wait_for_examine" 00:09:31.891 } 00:09:31.891 ] 00:09:31.891 } 00:09:31.891 ] 00:09:31.891 } 00:09:31.891 [2024-11-04 17:10:32.463797] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:31.891 [2024-11-04 17:10:32.463895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60874 ] 00:09:31.891 [2024-11-04 17:10:32.611399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.891 [2024-11-04 17:10:32.678814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.150 [2024-11-04 17:10:32.734716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.528  [2024-11-04T17:10:34.332Z] Copying: 53/64 [MB] (53 MBps) [2024-11-04T17:10:34.591Z] Copying: 64/64 [MB] (average 53 MBps) 00:09:33.787 00:09:33.787 00:09:33.787 real 0m1.943s 00:09:33.787 user 0m1.716s 00:09:33.787 sys 0m1.550s 00:09:33.787 ************************************ 00:09:33.787 END TEST dd_copy_to_out_bdev 00:09:33.787 ************************************ 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:33.787 ************************************ 00:09:33.787 START TEST dd_offset_magic 00:09:33.787 ************************************ 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:33.787 17:10:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:33.787 [2024-11-04 17:10:34.469075] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:33.787 [2024-11-04 17:10:34.469176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60919 ] 00:09:33.787 { 00:09:33.787 "subsystems": [ 00:09:33.787 { 00:09:33.787 "subsystem": "bdev", 00:09:33.787 "config": [ 00:09:33.787 { 00:09:33.787 "params": { 00:09:33.787 "trtype": "pcie", 00:09:33.787 "traddr": "0000:00:10.0", 00:09:33.787 "name": "Nvme0" 00:09:33.787 }, 00:09:33.787 "method": "bdev_nvme_attach_controller" 00:09:33.787 }, 00:09:33.787 { 00:09:33.787 "params": { 00:09:33.787 "trtype": "pcie", 00:09:33.787 "traddr": "0000:00:11.0", 00:09:33.787 "name": "Nvme1" 00:09:33.787 }, 00:09:33.787 "method": "bdev_nvme_attach_controller" 00:09:33.787 }, 00:09:33.787 { 00:09:33.787 "method": "bdev_wait_for_examine" 00:09:33.787 } 00:09:33.787 ] 00:09:33.787 } 00:09:33.787 ] 00:09:33.787 } 00:09:34.045 [2024-11-04 17:10:34.615890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.045 [2024-11-04 17:10:34.667537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.045 [2024-11-04 17:10:34.723890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.304  [2024-11-04T17:10:35.367Z] Copying: 65/65 [MB] (average 890 MBps) 00:09:34.563 00:09:34.564 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:34.564 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:34.564 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:34.564 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:34.564 [2024-11-04 17:10:35.289067] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:34.564 [2024-11-04 17:10:35.289173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60939 ] 00:09:34.564 { 00:09:34.564 "subsystems": [ 00:09:34.564 { 00:09:34.564 "subsystem": "bdev", 00:09:34.564 "config": [ 00:09:34.564 { 00:09:34.564 "params": { 00:09:34.564 "trtype": "pcie", 00:09:34.564 "traddr": "0000:00:10.0", 00:09:34.564 "name": "Nvme0" 00:09:34.564 }, 00:09:34.564 "method": "bdev_nvme_attach_controller" 00:09:34.564 }, 00:09:34.564 { 00:09:34.564 "params": { 00:09:34.564 "trtype": "pcie", 00:09:34.564 "traddr": "0000:00:11.0", 00:09:34.564 "name": "Nvme1" 00:09:34.564 }, 00:09:34.564 "method": "bdev_nvme_attach_controller" 00:09:34.564 }, 00:09:34.564 { 00:09:34.564 "method": "bdev_wait_for_examine" 00:09:34.564 } 00:09:34.564 ] 00:09:34.564 } 00:09:34.564 ] 00:09:34.564 } 00:09:34.825 [2024-11-04 17:10:35.436926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.825 [2024-11-04 17:10:35.484794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.825 [2024-11-04 17:10:35.542157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.084  [2024-11-04T17:10:36.147Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:35.343 00:09:35.343 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:35.343 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:35.343 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:35.343 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:35.343 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:35.344 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:35.344 17:10:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:35.344 [2024-11-04 17:10:35.968025] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:35.344 [2024-11-04 17:10:35.968126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60955 ] 00:09:35.344 { 00:09:35.344 "subsystems": [ 00:09:35.344 { 00:09:35.344 "subsystem": "bdev", 00:09:35.344 "config": [ 00:09:35.344 { 00:09:35.344 "params": { 00:09:35.344 "trtype": "pcie", 00:09:35.344 "traddr": "0000:00:10.0", 00:09:35.344 "name": "Nvme0" 00:09:35.344 }, 00:09:35.344 "method": "bdev_nvme_attach_controller" 00:09:35.344 }, 00:09:35.344 { 00:09:35.344 "params": { 00:09:35.344 "trtype": "pcie", 00:09:35.344 "traddr": "0000:00:11.0", 00:09:35.344 "name": "Nvme1" 00:09:35.344 }, 00:09:35.344 "method": "bdev_nvme_attach_controller" 00:09:35.344 }, 00:09:35.344 { 00:09:35.344 "method": "bdev_wait_for_examine" 00:09:35.344 } 00:09:35.344 ] 00:09:35.344 } 00:09:35.344 ] 00:09:35.344 } 00:09:35.344 [2024-11-04 17:10:36.120550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.603 [2024-11-04 17:10:36.172792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.603 [2024-11-04 17:10:36.228785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.864  [2024-11-04T17:10:36.933Z] Copying: 65/65 [MB] (average 984 MBps) 00:09:36.129 00:09:36.129 17:10:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:36.129 17:10:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:36.129 17:10:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:36.129 17:10:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:36.129 [2024-11-04 17:10:36.775704] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:36.129 [2024-11-04 17:10:36.775799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60970 ] 00:09:36.129 { 00:09:36.129 "subsystems": [ 00:09:36.129 { 00:09:36.129 "subsystem": "bdev", 00:09:36.129 "config": [ 00:09:36.129 { 00:09:36.129 "params": { 00:09:36.129 "trtype": "pcie", 00:09:36.129 "traddr": "0000:00:10.0", 00:09:36.129 "name": "Nvme0" 00:09:36.129 }, 00:09:36.129 "method": "bdev_nvme_attach_controller" 00:09:36.129 }, 00:09:36.129 { 00:09:36.129 "params": { 00:09:36.129 "trtype": "pcie", 00:09:36.129 "traddr": "0000:00:11.0", 00:09:36.129 "name": "Nvme1" 00:09:36.129 }, 00:09:36.129 "method": "bdev_nvme_attach_controller" 00:09:36.129 }, 00:09:36.129 { 00:09:36.129 "method": "bdev_wait_for_examine" 00:09:36.129 } 00:09:36.129 ] 00:09:36.129 } 00:09:36.129 ] 00:09:36.129 } 00:09:36.129 [2024-11-04 17:10:36.925686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.388 [2024-11-04 17:10:36.986472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.388 [2024-11-04 17:10:37.046506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.647  [2024-11-04T17:10:37.451Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:36.647 00:09:36.647 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:36.647 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:36.647 00:09:36.647 real 0m2.999s 00:09:36.647 user 0m2.155s 00:09:36.647 sys 0m0.950s 00:09:36.647 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:36.647 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:36.647 ************************************ 00:09:36.647 END TEST dd_offset_magic 00:09:36.647 ************************************ 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:36.906 17:10:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:36.906 [2024-11-04 17:10:37.511165] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
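For orientation, the dd_offset_magic iterations traced above all follow the same write/read-back pattern: copy 65 MiB from Nvme0n1 into Nvme1n1 at a given 1 MiB-block offset with --seek, read one block back from the same offset with --skip, and compare its first 26 bytes against the literal "This Is Our Magic, find it". A reduced sketch of the offset-64 iteration (flags and offsets are taken from the trace; $conf stands for the bdev JSON shown throughout, and the redirection feeding read is implied by the script rather than visible in the xtrace output):

    # write 65 blocks of 1 MiB starting at block 64 of Nvme1n1
    spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json "$conf"
    # read a single block back from the same offset and verify the 26-byte magic prefix
    spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=64 --bs=1048576 --json "$conf"
    read -rn26 magic_check < dd.dump1
    [[ $magic_check == "This Is Our Magic, find it" ]]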
00:09:36.906 [2024-11-04 17:10:37.511313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61007 ] 00:09:36.906 { 00:09:36.906 "subsystems": [ 00:09:36.906 { 00:09:36.906 "subsystem": "bdev", 00:09:36.906 "config": [ 00:09:36.906 { 00:09:36.906 "params": { 00:09:36.906 "trtype": "pcie", 00:09:36.906 "traddr": "0000:00:10.0", 00:09:36.906 "name": "Nvme0" 00:09:36.906 }, 00:09:36.906 "method": "bdev_nvme_attach_controller" 00:09:36.906 }, 00:09:36.906 { 00:09:36.906 "params": { 00:09:36.906 "trtype": "pcie", 00:09:36.906 "traddr": "0000:00:11.0", 00:09:36.906 "name": "Nvme1" 00:09:36.906 }, 00:09:36.906 "method": "bdev_nvme_attach_controller" 00:09:36.906 }, 00:09:36.906 { 00:09:36.906 "method": "bdev_wait_for_examine" 00:09:36.906 } 00:09:36.906 ] 00:09:36.906 } 00:09:36.906 ] 00:09:36.906 } 00:09:36.906 [2024-11-04 17:10:37.658331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.906 [2024-11-04 17:10:37.700924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.166 [2024-11-04 17:10:37.754281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.166  [2024-11-04T17:10:38.229Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:09:37.425 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:37.425 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:37.425 [2024-11-04 17:10:38.177826] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
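The cleanup running here (clear_nvme) simply zero-fills the region each namespace was written to: size=4194330 bytes at bs=1048576 rounds up to count=5, so five 1 MiB blocks of zeroes are pushed through spdk_dd per bdev. Sketch of the Nvme1n1 pass about to start ($conf again stands for the bdev JSON config printed in the trace):

    # overwrite the first 5 MiB of Nvme1n1 with zeroes (4194330 bytes rounded up to 5 x 1 MiB blocks)
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json "$conf"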
00:09:37.425 [2024-11-04 17:10:38.177940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61028 ] 00:09:37.425 { 00:09:37.425 "subsystems": [ 00:09:37.425 { 00:09:37.425 "subsystem": "bdev", 00:09:37.425 "config": [ 00:09:37.425 { 00:09:37.425 "params": { 00:09:37.425 "trtype": "pcie", 00:09:37.425 "traddr": "0000:00:10.0", 00:09:37.425 "name": "Nvme0" 00:09:37.425 }, 00:09:37.425 "method": "bdev_nvme_attach_controller" 00:09:37.425 }, 00:09:37.425 { 00:09:37.425 "params": { 00:09:37.425 "trtype": "pcie", 00:09:37.425 "traddr": "0000:00:11.0", 00:09:37.425 "name": "Nvme1" 00:09:37.425 }, 00:09:37.425 "method": "bdev_nvme_attach_controller" 00:09:37.425 }, 00:09:37.425 { 00:09:37.425 "method": "bdev_wait_for_examine" 00:09:37.425 } 00:09:37.425 ] 00:09:37.425 } 00:09:37.425 ] 00:09:37.425 } 00:09:37.684 [2024-11-04 17:10:38.325262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.684 [2024-11-04 17:10:38.378479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.684 [2024-11-04 17:10:38.434854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:37.943  [2024-11-04T17:10:39.007Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:09:38.203 00:09:38.203 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:38.203 ************************************ 00:09:38.203 END TEST spdk_dd_bdev_to_bdev 00:09:38.203 ************************************ 00:09:38.203 00:09:38.203 real 0m7.257s 00:09:38.203 user 0m5.329s 00:09:38.203 sys 0m3.534s 00:09:38.203 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.203 17:10:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:38.203 17:10:38 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:38.203 17:10:38 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:38.203 17:10:38 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:38.203 17:10:38 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.203 17:10:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:38.203 ************************************ 00:09:38.203 START TEST spdk_dd_uring 00:09:38.203 ************************************ 00:09:38.203 17:10:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:38.203 * Looking for test storage... 
00:09:38.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:38.203 17:10:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:38.203 17:10:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:09:38.203 17:10:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.463 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:38.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.464 --rc genhtml_branch_coverage=1 00:09:38.464 --rc genhtml_function_coverage=1 00:09:38.464 --rc genhtml_legend=1 00:09:38.464 --rc geninfo_all_blocks=1 00:09:38.464 --rc geninfo_unexecuted_blocks=1 00:09:38.464 00:09:38.464 ' 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:38.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.464 --rc genhtml_branch_coverage=1 00:09:38.464 --rc genhtml_function_coverage=1 00:09:38.464 --rc genhtml_legend=1 00:09:38.464 --rc geninfo_all_blocks=1 00:09:38.464 --rc geninfo_unexecuted_blocks=1 00:09:38.464 00:09:38.464 ' 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:38.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.464 --rc genhtml_branch_coverage=1 00:09:38.464 --rc genhtml_function_coverage=1 00:09:38.464 --rc genhtml_legend=1 00:09:38.464 --rc geninfo_all_blocks=1 00:09:38.464 --rc geninfo_unexecuted_blocks=1 00:09:38.464 00:09:38.464 ' 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:38.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.464 --rc genhtml_branch_coverage=1 00:09:38.464 --rc genhtml_function_coverage=1 00:09:38.464 --rc genhtml_legend=1 00:09:38.464 --rc geninfo_all_blocks=1 00:09:38.464 --rc geninfo_unexecuted_blocks=1 00:09:38.464 00:09:38.464 ' 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:38.464 ************************************ 00:09:38.464 START TEST dd_uring_copy 00:09:38.464 ************************************ 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:38.464 
17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:38.464 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=6kg0l0ngcpq8hc2csl4e9l9d1o4oj62ygpnwi62v30twjd4cuvzmom365jrng2qlicjj0tzek74ytrm2v6nac5k7q3kyhmduxd8at8x6v6jbfoigw5v2b49c9if4qdc439epzpjejx3tvin62wrdu30dixogz9yyog4f8ut0r6niw7jv26vhpztivcd8jiilp1efn6vura4fx6k87agm3qyt538hikb78n78bl6m34piq216d900on32ze36m1x30vblrwf7wxu2db2ca2ne6yoqwhywed4150f7ggrjwjuxz4pjviy7aqw7e601svvuvjral8aw3wstdnmstra88iruot0tpd62wg140hiqgp8jt8294b3u2dqp6ppda2gufpjbnhpq1qrfbc3366vzoerymogw5ko6qiwtkbe8tdyajw16ndnn5bo2hfru6og5kmbur4v6w32fyz9106zyqq9tocet22ajbvyao9pn15t80xwskd4zwpp58933nqngtgp7rc5a60q0sy9o3cjunmt7o40szkmg8p1yedkjh0iezcfkssltfrdddoj3xwb936f8s7wu1i2pasgr6e3jptxkx4mwursgmyiqvk0zhw3wav891wtj4638hiz09rqh6e59pp64xeggrui2bewgj2s04hn6ez692hhuhb2u8s9ibmswbyzog4b888f8oso7da0api9yrdves8zcfovqpmvpopwclwpsaddl8syddw8e67dq93sjapfztbriuguwgtxjzhdtgkcvarwhgfuoz09afo3ig3wp82m5tdigll9gs49nrgevaonteh9h497ed1lxey13q7n8pfql6f4bsssuchnylrfrca68ey1bgdflb67x2v5x9wlc0u23g2u5vg8nctb64c1sxd69mim7glm16w7f6g00n8qtktnz8deovoh3z0mksje7ddzgqpoyugqnpx2qvvnk93i2gzqm6pg9horso9na28gktx538hgv1mcnc85vp7esew0pwbsq 00:09:38.465 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
6kg0l0ngcpq8hc2csl4e9l9d1o4oj62ygpnwi62v30twjd4cuvzmom365jrng2qlicjj0tzek74ytrm2v6nac5k7q3kyhmduxd8at8x6v6jbfoigw5v2b49c9if4qdc439epzpjejx3tvin62wrdu30dixogz9yyog4f8ut0r6niw7jv26vhpztivcd8jiilp1efn6vura4fx6k87agm3qyt538hikb78n78bl6m34piq216d900on32ze36m1x30vblrwf7wxu2db2ca2ne6yoqwhywed4150f7ggrjwjuxz4pjviy7aqw7e601svvuvjral8aw3wstdnmstra88iruot0tpd62wg140hiqgp8jt8294b3u2dqp6ppda2gufpjbnhpq1qrfbc3366vzoerymogw5ko6qiwtkbe8tdyajw16ndnn5bo2hfru6og5kmbur4v6w32fyz9106zyqq9tocet22ajbvyao9pn15t80xwskd4zwpp58933nqngtgp7rc5a60q0sy9o3cjunmt7o40szkmg8p1yedkjh0iezcfkssltfrdddoj3xwb936f8s7wu1i2pasgr6e3jptxkx4mwursgmyiqvk0zhw3wav891wtj4638hiz09rqh6e59pp64xeggrui2bewgj2s04hn6ez692hhuhb2u8s9ibmswbyzog4b888f8oso7da0api9yrdves8zcfovqpmvpopwclwpsaddl8syddw8e67dq93sjapfztbriuguwgtxjzhdtgkcvarwhgfuoz09afo3ig3wp82m5tdigll9gs49nrgevaonteh9h497ed1lxey13q7n8pfql6f4bsssuchnylrfrca68ey1bgdflb67x2v5x9wlc0u23g2u5vg8nctb64c1sxd69mim7glm16w7f6g00n8qtktnz8deovoh3z0mksje7ddzgqpoyugqnpx2qvvnk93i2gzqm6pg9horso9na28gktx538hgv1mcnc85vp7esew0pwbsq 00:09:38.465 17:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:38.465 [2024-11-04 17:10:39.176099] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:38.465 [2024-11-04 17:10:39.176215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61109 ] 00:09:38.724 [2024-11-04 17:10:39.315731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.724 [2024-11-04 17:10:39.368357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.724 [2024-11-04 17:10:39.420936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.659  [2024-11-04T17:10:40.722Z] Copying: 511/511 [MB] (average 1064 MBps) 00:09:39.918 00:09:39.918 17:10:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:39.918 17:10:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:39.918 17:10:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:39.918 17:10:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:39.918 { 00:09:39.918 "subsystems": [ 00:09:39.918 { 00:09:39.918 "subsystem": "bdev", 00:09:39.918 "config": [ 00:09:39.918 { 00:09:39.918 "params": { 00:09:39.918 "block_size": 512, 00:09:39.918 "num_blocks": 1048576, 00:09:39.918 "name": "malloc0" 00:09:39.918 }, 00:09:39.918 "method": "bdev_malloc_create" 00:09:39.918 }, 00:09:39.918 { 00:09:39.918 "params": { 00:09:39.918 "filename": "/dev/zram1", 00:09:39.918 "name": "uring0" 00:09:39.918 }, 00:09:39.918 "method": "bdev_uring_create" 00:09:39.918 }, 00:09:39.918 { 00:09:39.918 "method": "bdev_wait_for_examine" 00:09:39.918 } 00:09:39.918 ] 00:09:39.918 } 00:09:39.918 ] 00:09:39.918 } 00:09:39.918 [2024-11-04 17:10:40.556560] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
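The dd_uring_copy test now running stages everything on a zram device exposed through SPDK's uring bdev: a device is hot-added via /sys/class/zram-control/hot_add (returning id 1 in this run), sized to 512M, and the spdk_dd config declares a malloc bdev (malloc0, 1048576 blocks of 512 bytes) next to a uring bdev (uring0) backed by /dev/zram1. A condensed sketch of that setup; the sysfs attribute receiving the "echo 512M" is inferred from the standard zram interface, since the trace only shows the echo itself:

    dev_id=$(cat /sys/class/zram-control/hot_add)       # returned 1 in this run
    echo 512M > /sys/block/zram${dev_id}/disksize        # assumed target of the "echo 512M" in the trace
    # copy the magic dump file into the uring bdev; $conf carries bdev_malloc_create + bdev_uring_create
    spdk_dd --if=magic.dump0 --ob=uring0 --json "$conf"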
00:09:39.918 [2024-11-04 17:10:40.557150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61125 ] 00:09:39.918 [2024-11-04 17:10:40.705619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.178 [2024-11-04 17:10:40.756524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.178 [2024-11-04 17:10:40.812911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.564  [2024-11-04T17:10:43.302Z] Copying: 214/512 [MB] (214 MBps) [2024-11-04T17:10:43.560Z] Copying: 427/512 [MB] (212 MBps) [2024-11-04T17:10:43.819Z] Copying: 512/512 [MB] (average 215 MBps) 00:09:43.015 00:09:43.015 17:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:43.015 17:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:43.015 17:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:43.015 17:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:43.273 [2024-11-04 17:10:43.819955] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:43.273 [2024-11-04 17:10:43.820067] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61169 ] 00:09:43.273 { 00:09:43.273 "subsystems": [ 00:09:43.273 { 00:09:43.273 "subsystem": "bdev", 00:09:43.273 "config": [ 00:09:43.273 { 00:09:43.273 "params": { 00:09:43.273 "block_size": 512, 00:09:43.273 "num_blocks": 1048576, 00:09:43.273 "name": "malloc0" 00:09:43.273 }, 00:09:43.273 "method": "bdev_malloc_create" 00:09:43.273 }, 00:09:43.273 { 00:09:43.273 "params": { 00:09:43.273 "filename": "/dev/zram1", 00:09:43.273 "name": "uring0" 00:09:43.273 }, 00:09:43.273 "method": "bdev_uring_create" 00:09:43.273 }, 00:09:43.273 { 00:09:43.273 "method": "bdev_wait_for_examine" 00:09:43.273 } 00:09:43.273 ] 00:09:43.273 } 00:09:43.273 ] 00:09:43.273 } 00:09:43.273 [2024-11-04 17:10:43.966848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.273 [2024-11-04 17:10:44.022827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.532 [2024-11-04 17:10:44.076361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.909  [2024-11-04T17:10:46.649Z] Copying: 185/512 [MB] (185 MBps) [2024-11-04T17:10:47.590Z] Copying: 348/512 [MB] (162 MBps) [2024-11-04T17:10:47.590Z] Copying: 488/512 [MB] (140 MBps) [2024-11-04T17:10:47.862Z] Copying: 512/512 [MB] (average 163 MBps) 00:09:47.058 00:09:47.058 17:10:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:47.058 17:10:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
6kg0l0ngcpq8hc2csl4e9l9d1o4oj62ygpnwi62v30twjd4cuvzmom365jrng2qlicjj0tzek74ytrm2v6nac5k7q3kyhmduxd8at8x6v6jbfoigw5v2b49c9if4qdc439epzpjejx3tvin62wrdu30dixogz9yyog4f8ut0r6niw7jv26vhpztivcd8jiilp1efn6vura4fx6k87agm3qyt538hikb78n78bl6m34piq216d900on32ze36m1x30vblrwf7wxu2db2ca2ne6yoqwhywed4150f7ggrjwjuxz4pjviy7aqw7e601svvuvjral8aw3wstdnmstra88iruot0tpd62wg140hiqgp8jt8294b3u2dqp6ppda2gufpjbnhpq1qrfbc3366vzoerymogw5ko6qiwtkbe8tdyajw16ndnn5bo2hfru6og5kmbur4v6w32fyz9106zyqq9tocet22ajbvyao9pn15t80xwskd4zwpp58933nqngtgp7rc5a60q0sy9o3cjunmt7o40szkmg8p1yedkjh0iezcfkssltfrdddoj3xwb936f8s7wu1i2pasgr6e3jptxkx4mwursgmyiqvk0zhw3wav891wtj4638hiz09rqh6e59pp64xeggrui2bewgj2s04hn6ez692hhuhb2u8s9ibmswbyzog4b888f8oso7da0api9yrdves8zcfovqpmvpopwclwpsaddl8syddw8e67dq93sjapfztbriuguwgtxjzhdtgkcvarwhgfuoz09afo3ig3wp82m5tdigll9gs49nrgevaonteh9h497ed1lxey13q7n8pfql6f4bsssuchnylrfrca68ey1bgdflb67x2v5x9wlc0u23g2u5vg8nctb64c1sxd69mim7glm16w7f6g00n8qtktnz8deovoh3z0mksje7ddzgqpoyugqnpx2qvvnk93i2gzqm6pg9horso9na28gktx538hgv1mcnc85vp7esew0pwbsq == \6\k\g\0\l\0\n\g\c\p\q\8\h\c\2\c\s\l\4\e\9\l\9\d\1\o\4\o\j\6\2\y\g\p\n\w\i\6\2\v\3\0\t\w\j\d\4\c\u\v\z\m\o\m\3\6\5\j\r\n\g\2\q\l\i\c\j\j\0\t\z\e\k\7\4\y\t\r\m\2\v\6\n\a\c\5\k\7\q\3\k\y\h\m\d\u\x\d\8\a\t\8\x\6\v\6\j\b\f\o\i\g\w\5\v\2\b\4\9\c\9\i\f\4\q\d\c\4\3\9\e\p\z\p\j\e\j\x\3\t\v\i\n\6\2\w\r\d\u\3\0\d\i\x\o\g\z\9\y\y\o\g\4\f\8\u\t\0\r\6\n\i\w\7\j\v\2\6\v\h\p\z\t\i\v\c\d\8\j\i\i\l\p\1\e\f\n\6\v\u\r\a\4\f\x\6\k\8\7\a\g\m\3\q\y\t\5\3\8\h\i\k\b\7\8\n\7\8\b\l\6\m\3\4\p\i\q\2\1\6\d\9\0\0\o\n\3\2\z\e\3\6\m\1\x\3\0\v\b\l\r\w\f\7\w\x\u\2\d\b\2\c\a\2\n\e\6\y\o\q\w\h\y\w\e\d\4\1\5\0\f\7\g\g\r\j\w\j\u\x\z\4\p\j\v\i\y\7\a\q\w\7\e\6\0\1\s\v\v\u\v\j\r\a\l\8\a\w\3\w\s\t\d\n\m\s\t\r\a\8\8\i\r\u\o\t\0\t\p\d\6\2\w\g\1\4\0\h\i\q\g\p\8\j\t\8\2\9\4\b\3\u\2\d\q\p\6\p\p\d\a\2\g\u\f\p\j\b\n\h\p\q\1\q\r\f\b\c\3\3\6\6\v\z\o\e\r\y\m\o\g\w\5\k\o\6\q\i\w\t\k\b\e\8\t\d\y\a\j\w\1\6\n\d\n\n\5\b\o\2\h\f\r\u\6\o\g\5\k\m\b\u\r\4\v\6\w\3\2\f\y\z\9\1\0\6\z\y\q\q\9\t\o\c\e\t\2\2\a\j\b\v\y\a\o\9\p\n\1\5\t\8\0\x\w\s\k\d\4\z\w\p\p\5\8\9\3\3\n\q\n\g\t\g\p\7\r\c\5\a\6\0\q\0\s\y\9\o\3\c\j\u\n\m\t\7\o\4\0\s\z\k\m\g\8\p\1\y\e\d\k\j\h\0\i\e\z\c\f\k\s\s\l\t\f\r\d\d\d\o\j\3\x\w\b\9\3\6\f\8\s\7\w\u\1\i\2\p\a\s\g\r\6\e\3\j\p\t\x\k\x\4\m\w\u\r\s\g\m\y\i\q\v\k\0\z\h\w\3\w\a\v\8\9\1\w\t\j\4\6\3\8\h\i\z\0\9\r\q\h\6\e\5\9\p\p\6\4\x\e\g\g\r\u\i\2\b\e\w\g\j\2\s\0\4\h\n\6\e\z\6\9\2\h\h\u\h\b\2\u\8\s\9\i\b\m\s\w\b\y\z\o\g\4\b\8\8\8\f\8\o\s\o\7\d\a\0\a\p\i\9\y\r\d\v\e\s\8\z\c\f\o\v\q\p\m\v\p\o\p\w\c\l\w\p\s\a\d\d\l\8\s\y\d\d\w\8\e\6\7\d\q\9\3\s\j\a\p\f\z\t\b\r\i\u\g\u\w\g\t\x\j\z\h\d\t\g\k\c\v\a\r\w\h\g\f\u\o\z\0\9\a\f\o\3\i\g\3\w\p\8\2\m\5\t\d\i\g\l\l\9\g\s\4\9\n\r\g\e\v\a\o\n\t\e\h\9\h\4\9\7\e\d\1\l\x\e\y\1\3\q\7\n\8\p\f\q\l\6\f\4\b\s\s\s\u\c\h\n\y\l\r\f\r\c\a\6\8\e\y\1\b\g\d\f\l\b\6\7\x\2\v\5\x\9\w\l\c\0\u\2\3\g\2\u\5\v\g\8\n\c\t\b\6\4\c\1\s\x\d\6\9\m\i\m\7\g\l\m\1\6\w\7\f\6\g\0\0\n\8\q\t\k\t\n\z\8\d\e\o\v\o\h\3\z\0\m\k\s\j\e\7\d\d\z\g\q\p\o\y\u\g\q\n\p\x\2\q\v\v\n\k\9\3\i\2\g\z\q\m\6\p\g\9\h\o\r\s\o\9\n\a\2\8\g\k\t\x\5\3\8\h\g\v\1\m\c\n\c\8\5\v\p\7\e\s\e\w\0\p\w\b\s\q ]] 00:09:47.058 17:10:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:47.058 17:10:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 
6kg0l0ngcpq8hc2csl4e9l9d1o4oj62ygpnwi62v30twjd4cuvzmom365jrng2qlicjj0tzek74ytrm2v6nac5k7q3kyhmduxd8at8x6v6jbfoigw5v2b49c9if4qdc439epzpjejx3tvin62wrdu30dixogz9yyog4f8ut0r6niw7jv26vhpztivcd8jiilp1efn6vura4fx6k87agm3qyt538hikb78n78bl6m34piq216d900on32ze36m1x30vblrwf7wxu2db2ca2ne6yoqwhywed4150f7ggrjwjuxz4pjviy7aqw7e601svvuvjral8aw3wstdnmstra88iruot0tpd62wg140hiqgp8jt8294b3u2dqp6ppda2gufpjbnhpq1qrfbc3366vzoerymogw5ko6qiwtkbe8tdyajw16ndnn5bo2hfru6og5kmbur4v6w32fyz9106zyqq9tocet22ajbvyao9pn15t80xwskd4zwpp58933nqngtgp7rc5a60q0sy9o3cjunmt7o40szkmg8p1yedkjh0iezcfkssltfrdddoj3xwb936f8s7wu1i2pasgr6e3jptxkx4mwursgmyiqvk0zhw3wav891wtj4638hiz09rqh6e59pp64xeggrui2bewgj2s04hn6ez692hhuhb2u8s9ibmswbyzog4b888f8oso7da0api9yrdves8zcfovqpmvpopwclwpsaddl8syddw8e67dq93sjapfztbriuguwgtxjzhdtgkcvarwhgfuoz09afo3ig3wp82m5tdigll9gs49nrgevaonteh9h497ed1lxey13q7n8pfql6f4bsssuchnylrfrca68ey1bgdflb67x2v5x9wlc0u23g2u5vg8nctb64c1sxd69mim7glm16w7f6g00n8qtktnz8deovoh3z0mksje7ddzgqpoyugqnpx2qvvnk93i2gzqm6pg9horso9na28gktx538hgv1mcnc85vp7esew0pwbsq == \6\k\g\0\l\0\n\g\c\p\q\8\h\c\2\c\s\l\4\e\9\l\9\d\1\o\4\o\j\6\2\y\g\p\n\w\i\6\2\v\3\0\t\w\j\d\4\c\u\v\z\m\o\m\3\6\5\j\r\n\g\2\q\l\i\c\j\j\0\t\z\e\k\7\4\y\t\r\m\2\v\6\n\a\c\5\k\7\q\3\k\y\h\m\d\u\x\d\8\a\t\8\x\6\v\6\j\b\f\o\i\g\w\5\v\2\b\4\9\c\9\i\f\4\q\d\c\4\3\9\e\p\z\p\j\e\j\x\3\t\v\i\n\6\2\w\r\d\u\3\0\d\i\x\o\g\z\9\y\y\o\g\4\f\8\u\t\0\r\6\n\i\w\7\j\v\2\6\v\h\p\z\t\i\v\c\d\8\j\i\i\l\p\1\e\f\n\6\v\u\r\a\4\f\x\6\k\8\7\a\g\m\3\q\y\t\5\3\8\h\i\k\b\7\8\n\7\8\b\l\6\m\3\4\p\i\q\2\1\6\d\9\0\0\o\n\3\2\z\e\3\6\m\1\x\3\0\v\b\l\r\w\f\7\w\x\u\2\d\b\2\c\a\2\n\e\6\y\o\q\w\h\y\w\e\d\4\1\5\0\f\7\g\g\r\j\w\j\u\x\z\4\p\j\v\i\y\7\a\q\w\7\e\6\0\1\s\v\v\u\v\j\r\a\l\8\a\w\3\w\s\t\d\n\m\s\t\r\a\8\8\i\r\u\o\t\0\t\p\d\6\2\w\g\1\4\0\h\i\q\g\p\8\j\t\8\2\9\4\b\3\u\2\d\q\p\6\p\p\d\a\2\g\u\f\p\j\b\n\h\p\q\1\q\r\f\b\c\3\3\6\6\v\z\o\e\r\y\m\o\g\w\5\k\o\6\q\i\w\t\k\b\e\8\t\d\y\a\j\w\1\6\n\d\n\n\5\b\o\2\h\f\r\u\6\o\g\5\k\m\b\u\r\4\v\6\w\3\2\f\y\z\9\1\0\6\z\y\q\q\9\t\o\c\e\t\2\2\a\j\b\v\y\a\o\9\p\n\1\5\t\8\0\x\w\s\k\d\4\z\w\p\p\5\8\9\3\3\n\q\n\g\t\g\p\7\r\c\5\a\6\0\q\0\s\y\9\o\3\c\j\u\n\m\t\7\o\4\0\s\z\k\m\g\8\p\1\y\e\d\k\j\h\0\i\e\z\c\f\k\s\s\l\t\f\r\d\d\d\o\j\3\x\w\b\9\3\6\f\8\s\7\w\u\1\i\2\p\a\s\g\r\6\e\3\j\p\t\x\k\x\4\m\w\u\r\s\g\m\y\i\q\v\k\0\z\h\w\3\w\a\v\8\9\1\w\t\j\4\6\3\8\h\i\z\0\9\r\q\h\6\e\5\9\p\p\6\4\x\e\g\g\r\u\i\2\b\e\w\g\j\2\s\0\4\h\n\6\e\z\6\9\2\h\h\u\h\b\2\u\8\s\9\i\b\m\s\w\b\y\z\o\g\4\b\8\8\8\f\8\o\s\o\7\d\a\0\a\p\i\9\y\r\d\v\e\s\8\z\c\f\o\v\q\p\m\v\p\o\p\w\c\l\w\p\s\a\d\d\l\8\s\y\d\d\w\8\e\6\7\d\q\9\3\s\j\a\p\f\z\t\b\r\i\u\g\u\w\g\t\x\j\z\h\d\t\g\k\c\v\a\r\w\h\g\f\u\o\z\0\9\a\f\o\3\i\g\3\w\p\8\2\m\5\t\d\i\g\l\l\9\g\s\4\9\n\r\g\e\v\a\o\n\t\e\h\9\h\4\9\7\e\d\1\l\x\e\y\1\3\q\7\n\8\p\f\q\l\6\f\4\b\s\s\s\u\c\h\n\y\l\r\f\r\c\a\6\8\e\y\1\b\g\d\f\l\b\6\7\x\2\v\5\x\9\w\l\c\0\u\2\3\g\2\u\5\v\g\8\n\c\t\b\6\4\c\1\s\x\d\6\9\m\i\m\7\g\l\m\1\6\w\7\f\6\g\0\0\n\8\q\t\k\t\n\z\8\d\e\o\v\o\h\3\z\0\m\k\s\j\e\7\d\d\z\g\q\p\o\y\u\g\q\n\p\x\2\q\v\v\n\k\9\3\i\2\g\z\q\m\6\p\g\9\h\o\r\s\o\9\n\a\2\8\g\k\t\x\5\3\8\h\g\v\1\m\c\n\c\8\5\v\p\7\e\s\e\w\0\p\w\b\s\q ]] 00:09:47.058 17:10:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:47.627 17:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:47.627 17:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:47.627 17:10:48 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:47.627 17:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:47.627 [2024-11-04 17:10:48.190135] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:47.627 [2024-11-04 17:10:48.190249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:09:47.627 { 00:09:47.627 "subsystems": [ 00:09:47.627 { 00:09:47.627 "subsystem": "bdev", 00:09:47.627 "config": [ 00:09:47.627 { 00:09:47.627 "params": { 00:09:47.627 "block_size": 512, 00:09:47.627 "num_blocks": 1048576, 00:09:47.627 "name": "malloc0" 00:09:47.627 }, 00:09:47.627 "method": "bdev_malloc_create" 00:09:47.627 }, 00:09:47.627 { 00:09:47.627 "params": { 00:09:47.627 "filename": "/dev/zram1", 00:09:47.627 "name": "uring0" 00:09:47.627 }, 00:09:47.627 "method": "bdev_uring_create" 00:09:47.627 }, 00:09:47.627 { 00:09:47.627 "method": "bdev_wait_for_examine" 00:09:47.627 } 00:09:47.627 ] 00:09:47.627 } 00:09:47.627 ] 00:09:47.627 } 00:09:47.627 [2024-11-04 17:10:48.335112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.627 [2024-11-04 17:10:48.406657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.887 [2024-11-04 17:10:48.471937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.919  [2024-11-04T17:10:51.101Z] Copying: 163/512 [MB] (163 MBps) [2024-11-04T17:10:52.040Z] Copying: 320/512 [MB] (156 MBps) [2024-11-04T17:10:52.040Z] Copying: 476/512 [MB] (155 MBps) [2024-11-04T17:10:52.300Z] Copying: 512/512 [MB] (average 158 MBps) 00:09:51.496 00:09:51.496 17:10:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:51.496 17:10:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:51.496 17:10:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:51.496 17:10:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:51.496 17:10:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:51.496 17:10:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:51.496 17:10:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:51.496 17:10:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:51.755 [2024-11-04 17:10:52.341379] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
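To recap the verification that just finished: the magic pattern was written through uring0, read back to magic.dump1, matched against the generated 1024-character string, and the two dump files were compared with diff -q before the whole uring0 device was copied into malloc0. The stage starting here reuses the same config with an extra bdev_uring_delete entry and routes the data through /dev/fd descriptors, so no further dump files appear. Reduced sketch of the verify step (which file each read draws from is implied by the script, not visible in the xtrace):

    read -rn1024 verify_magic < magic.dump1   # read back the first 1024 bytes
    [[ $verify_magic == "$magic" ]]           # $magic is the 1024-byte pattern generated by gen_bytes 1024
    diff -q magic.dump0 magic.dump1           # whole-file comparison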
00:09:51.755 [2024-11-04 17:10:52.341515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61310 ] 00:09:51.755 { 00:09:51.755 "subsystems": [ 00:09:51.755 { 00:09:51.755 "subsystem": "bdev", 00:09:51.755 "config": [ 00:09:51.755 { 00:09:51.755 "params": { 00:09:51.755 "block_size": 512, 00:09:51.755 "num_blocks": 1048576, 00:09:51.755 "name": "malloc0" 00:09:51.755 }, 00:09:51.755 "method": "bdev_malloc_create" 00:09:51.755 }, 00:09:51.755 { 00:09:51.755 "params": { 00:09:51.755 "filename": "/dev/zram1", 00:09:51.755 "name": "uring0" 00:09:51.755 }, 00:09:51.755 "method": "bdev_uring_create" 00:09:51.755 }, 00:09:51.755 { 00:09:51.755 "params": { 00:09:51.755 "name": "uring0" 00:09:51.755 }, 00:09:51.755 "method": "bdev_uring_delete" 00:09:51.755 }, 00:09:51.755 { 00:09:51.755 "method": "bdev_wait_for_examine" 00:09:51.755 } 00:09:51.755 ] 00:09:51.755 } 00:09:51.755 ] 00:09:51.755 } 00:09:51.755 [2024-11-04 17:10:52.487433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.755 [2024-11-04 17:10:52.534418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.014 [2024-11-04 17:10:52.588541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:52.014  [2024-11-04T17:10:53.386Z] Copying: 0/0 [B] (average 0 Bps) 00:09:52.582 00:09:52.582 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:52.582 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:52.582 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:52.582 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:09:52.582 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:52.583 17:10:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:52.583 17:10:53 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:52.583 [2024-11-04 17:10:53.243090] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:52.583 [2024-11-04 17:10:53.243192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61339 ] 00:09:52.583 { 00:09:52.583 "subsystems": [ 00:09:52.583 { 00:09:52.583 "subsystem": "bdev", 00:09:52.583 "config": [ 00:09:52.583 { 00:09:52.583 "params": { 00:09:52.583 "block_size": 512, 00:09:52.583 "num_blocks": 1048576, 00:09:52.583 "name": "malloc0" 00:09:52.583 }, 00:09:52.583 "method": "bdev_malloc_create" 00:09:52.583 }, 00:09:52.583 { 00:09:52.583 "params": { 00:09:52.583 "filename": "/dev/zram1", 00:09:52.583 "name": "uring0" 00:09:52.583 }, 00:09:52.583 "method": "bdev_uring_create" 00:09:52.583 }, 00:09:52.583 { 00:09:52.583 "params": { 00:09:52.583 "name": "uring0" 00:09:52.583 }, 00:09:52.583 "method": "bdev_uring_delete" 00:09:52.583 }, 00:09:52.583 { 00:09:52.583 "method": "bdev_wait_for_examine" 00:09:52.583 } 00:09:52.583 ] 00:09:52.583 } 00:09:52.583 ] 00:09:52.583 } 00:09:52.842 [2024-11-04 17:10:53.392583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.842 [2024-11-04 17:10:53.454442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.842 [2024-11-04 17:10:53.509203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:53.101 [2024-11-04 17:10:53.709812] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:53.101 [2024-11-04 17:10:53.709893] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:53.101 [2024-11-04 17:10:53.709920] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:53.101 [2024-11-04 17:10:53.709929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:53.360 [2024-11-04 17:10:54.063017] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:53.360 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:53.619 00:09:53.619 real 0m15.251s 00:09:53.619 user 0m10.200s 00:09:53.619 sys 0m13.209s 00:09:53.619 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.619 17:10:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:53.619 ************************************ 00:09:53.619 END TEST dd_uring_copy 00:09:53.619 ************************************ 00:09:53.619 00:09:53.619 real 0m15.498s 00:09:53.619 user 0m10.340s 00:09:53.619 sys 0m13.320s 00:09:53.619 17:10:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.619 ************************************ 00:09:53.619 END TEST spdk_dd_uring 00:09:53.619 17:10:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:53.619 ************************************ 00:09:53.879 17:10:54 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:53.879 17:10:54 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:53.879 17:10:54 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.879 17:10:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:53.879 ************************************ 00:09:53.879 START TEST spdk_dd_sparse 00:09:53.879 ************************************ 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:53.879 * Looking for test storage... 00:09:53.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.879 --rc genhtml_branch_coverage=1 00:09:53.879 --rc genhtml_function_coverage=1 00:09:53.879 --rc genhtml_legend=1 00:09:53.879 --rc geninfo_all_blocks=1 00:09:53.879 --rc geninfo_unexecuted_blocks=1 00:09:53.879 00:09:53.879 ' 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.879 --rc genhtml_branch_coverage=1 00:09:53.879 --rc genhtml_function_coverage=1 00:09:53.879 --rc genhtml_legend=1 00:09:53.879 --rc geninfo_all_blocks=1 00:09:53.879 --rc geninfo_unexecuted_blocks=1 00:09:53.879 00:09:53.879 ' 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.879 --rc genhtml_branch_coverage=1 00:09:53.879 --rc genhtml_function_coverage=1 00:09:53.879 --rc genhtml_legend=1 00:09:53.879 --rc geninfo_all_blocks=1 00:09:53.879 --rc geninfo_unexecuted_blocks=1 00:09:53.879 00:09:53.879 ' 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.879 --rc genhtml_branch_coverage=1 00:09:53.879 --rc genhtml_function_coverage=1 00:09:53.879 --rc genhtml_legend=1 00:09:53.879 --rc geninfo_all_blocks=1 00:09:53.879 --rc geninfo_unexecuted_blocks=1 00:09:53.879 00:09:53.879 ' 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.879 17:10:54 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.879 17:10:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:53.880 1+0 records in 00:09:53.880 1+0 records out 00:09:53.880 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00801963 s, 523 MB/s 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:53.880 1+0 records in 00:09:53.880 1+0 records out 00:09:53.880 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00558413 s, 751 MB/s 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:53.880 1+0 records in 00:09:53.880 1+0 records out 00:09:53.880 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00805412 s, 521 MB/s 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:53.880 ************************************ 00:09:53.880 START TEST dd_sparse_file_to_file 00:09:53.880 ************************************ 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:53.880 17:10:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:54.138 { 00:09:54.138 "subsystems": [ 00:09:54.138 { 00:09:54.138 "subsystem": "bdev", 00:09:54.138 "config": [ 00:09:54.138 { 00:09:54.138 "params": { 00:09:54.138 "block_size": 4096, 00:09:54.138 "filename": "dd_sparse_aio_disk", 00:09:54.138 "name": "dd_aio" 00:09:54.138 }, 00:09:54.138 "method": "bdev_aio_create" 00:09:54.138 }, 00:09:54.138 { 00:09:54.138 "params": { 00:09:54.138 "lvs_name": "dd_lvstore", 00:09:54.138 "bdev_name": "dd_aio" 00:09:54.138 }, 00:09:54.138 "method": "bdev_lvol_create_lvstore" 00:09:54.138 }, 00:09:54.138 { 00:09:54.138 "method": "bdev_wait_for_examine" 00:09:54.138 } 00:09:54.138 ] 00:09:54.138 } 00:09:54.138 ] 00:09:54.138 } 00:09:54.138 [2024-11-04 17:10:54.728264] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
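The prepare step above builds the 100 MiB AIO backing file and a sparse source file, and the stat checks later in this test compare apparent size against allocated blocks. A minimal standalone sketch of that layout and check follows, assuming GNU coreutils truncate/dd/stat; the file names mirror the trace, and file_zero2 stands for the destination produced by the spdk_dd --sparse copy recorded here.

```bash
#!/usr/bin/env bash
# Sketch only -- not part of the recorded run. Reproduces the sparse layout
# prepared by dd/sparse.sh's prepare() and the size checks applied afterwards.
set -e

truncate dd_sparse_aio_disk --size 104857600   # 100 MiB backing file for the dd_aio bdev

# Three 4 MiB extents at offsets 0, 16 MiB and 32 MiB (seek counts 4M blocks),
# leaving holes in between: apparent size 36 MiB, but only 12 MiB of data.
dd if=/dev/zero of=file_zero1 bs=4M count=1
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

# A sparse-preserving copy should keep both numbers identical on the copy:
#   %s = 37748736 bytes (36 MiB apparent), %b = 24576 blocks (12 MiB / 512 B).
stat --printf='%s %b\n' file_zero1
stat --printf='%s %b\n' file_zero2   # hypothetical: destination of the spdk_dd --sparse copy
```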
00:09:54.138 [2024-11-04 17:10:54.728734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61439 ] 00:09:54.138 [2024-11-04 17:10:54.875556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.138 [2024-11-04 17:10:54.927959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.396 [2024-11-04 17:10:54.983831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.396  [2024-11-04T17:10:55.459Z] Copying: 12/36 [MB] (average 923 MBps) 00:09:54.655 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:54.655 00:09:54.655 real 0m0.646s 00:09:54.655 user 0m0.393s 00:09:54.655 sys 0m0.349s 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.655 ************************************ 00:09:54.655 END TEST dd_sparse_file_to_file 00:09:54.655 ************************************ 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:54.655 ************************************ 00:09:54.655 START TEST dd_sparse_file_to_bdev 00:09:54.655 ************************************ 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:54.655 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:54.655 { 00:09:54.655 "subsystems": [ 00:09:54.655 { 00:09:54.655 "subsystem": "bdev", 00:09:54.655 "config": [ 00:09:54.655 { 00:09:54.655 "params": { 00:09:54.655 "block_size": 4096, 00:09:54.655 "filename": "dd_sparse_aio_disk", 00:09:54.655 "name": "dd_aio" 00:09:54.655 }, 00:09:54.655 "method": "bdev_aio_create" 00:09:54.655 }, 00:09:54.655 { 00:09:54.655 "params": { 00:09:54.655 "lvs_name": "dd_lvstore", 00:09:54.655 "lvol_name": "dd_lvol", 00:09:54.655 "size_in_mib": 36, 00:09:54.655 "thin_provision": true 00:09:54.655 }, 00:09:54.655 "method": "bdev_lvol_create" 00:09:54.655 }, 00:09:54.655 { 00:09:54.655 "method": "bdev_wait_for_examine" 00:09:54.655 } 00:09:54.655 ] 00:09:54.655 } 00:09:54.655 ] 00:09:54.655 } 00:09:54.655 [2024-11-04 17:10:55.425529] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:54.655 [2024-11-04 17:10:55.425765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61481 ] 00:09:54.915 [2024-11-04 17:10:55.575367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.915 [2024-11-04 17:10:55.631067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.915 [2024-11-04 17:10:55.685694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.173  [2024-11-04T17:10:56.236Z] Copying: 12/36 [MB] (average 461 MBps) 00:09:55.432 00:09:55.432 00:09:55.432 real 0m0.630s 00:09:55.432 user 0m0.392s 00:09:55.432 sys 0m0.348s 00:09:55.432 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.432 ************************************ 00:09:55.432 END TEST dd_sparse_file_to_bdev 00:09:55.432 ************************************ 00:09:55.432 17:10:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:55.432 ************************************ 00:09:55.432 START TEST dd_sparse_bdev_to_file 00:09:55.432 ************************************ 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:55.432 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:55.432 { 00:09:55.432 "subsystems": [ 00:09:55.432 { 00:09:55.432 "subsystem": "bdev", 00:09:55.432 "config": [ 00:09:55.432 { 00:09:55.432 "params": { 00:09:55.432 "block_size": 4096, 00:09:55.432 "filename": "dd_sparse_aio_disk", 00:09:55.432 "name": "dd_aio" 00:09:55.432 }, 00:09:55.432 "method": "bdev_aio_create" 00:09:55.432 }, 00:09:55.432 { 00:09:55.432 "method": "bdev_wait_for_examine" 00:09:55.433 } 00:09:55.433 ] 00:09:55.433 } 00:09:55.433 ] 00:09:55.433 } 00:09:55.433 [2024-11-04 17:10:56.110637] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:55.433 [2024-11-04 17:10:56.110908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61519 ] 00:09:55.692 [2024-11-04 17:10:56.259612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.692 [2024-11-04 17:10:56.317963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.692 [2024-11-04 17:10:56.372504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.692  [2024-11-04T17:10:56.758Z] Copying: 12/36 [MB] (average 705 MBps) 00:09:55.954 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:55.954 00:09:55.954 real 0m0.646s 00:09:55.954 user 0m0.401s 00:09:55.954 
sys 0m0.357s 00:09:55.954 ************************************ 00:09:55.954 END TEST dd_sparse_bdev_to_file 00:09:55.954 ************************************ 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:55.954 17:10:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:56.216 17:10:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:56.216 ************************************ 00:09:56.216 END TEST spdk_dd_sparse 00:09:56.217 ************************************ 00:09:56.217 00:09:56.217 real 0m2.319s 00:09:56.217 user 0m1.345s 00:09:56.217 sys 0m1.285s 00:09:56.217 17:10:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.217 17:10:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:56.217 17:10:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:56.217 17:10:56 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.217 17:10:56 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.217 17:10:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:56.217 ************************************ 00:09:56.217 START TEST spdk_dd_negative 00:09:56.217 ************************************ 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:56.217 * Looking for test storage... 
00:09:56.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.217 17:10:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.217 --rc genhtml_branch_coverage=1 00:09:56.217 --rc genhtml_function_coverage=1 00:09:56.217 --rc genhtml_legend=1 00:09:56.217 --rc geninfo_all_blocks=1 00:09:56.217 --rc geninfo_unexecuted_blocks=1 00:09:56.217 00:09:56.217 ' 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.217 --rc genhtml_branch_coverage=1 00:09:56.217 --rc genhtml_function_coverage=1 00:09:56.217 --rc genhtml_legend=1 00:09:56.217 --rc geninfo_all_blocks=1 00:09:56.217 --rc geninfo_unexecuted_blocks=1 00:09:56.217 00:09:56.217 ' 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.217 --rc genhtml_branch_coverage=1 00:09:56.217 --rc genhtml_function_coverage=1 00:09:56.217 --rc genhtml_legend=1 00:09:56.217 --rc geninfo_all_blocks=1 00:09:56.217 --rc geninfo_unexecuted_blocks=1 00:09:56.217 00:09:56.217 ' 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:56.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.217 --rc genhtml_branch_coverage=1 00:09:56.217 --rc genhtml_function_coverage=1 00:09:56.217 --rc genhtml_legend=1 00:09:56.217 --rc geninfo_all_blocks=1 00:09:56.217 --rc geninfo_unexecuted_blocks=1 00:09:56.217 00:09:56.217 ' 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.217 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.476 ************************************ 00:09:56.476 START TEST 
dd_invalid_arguments 00:09:56.476 ************************************ 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.476 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:56.476 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:56.476 00:09:56.476 CPU options: 00:09:56.476 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:56.476 (like [0,1,10]) 00:09:56.476 --lcores lcore to CPU mapping list. The list is in the format: 00:09:56.476 [<,lcores[@CPUs]>...] 00:09:56.476 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:56.476 Within the group, '-' is used for range separator, 00:09:56.476 ',' is used for single number separator. 00:09:56.476 '( )' can be omitted for single element group, 00:09:56.476 '@' can be omitted if cpus and lcores have the same value 00:09:56.476 --disable-cpumask-locks Disable CPU core lock files. 00:09:56.476 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:56.476 pollers in the app support interrupt mode) 00:09:56.476 -p, --main-core main (primary) core for DPDK 00:09:56.476 00:09:56.476 Configuration options: 00:09:56.476 -c, --config, --json JSON config file 00:09:56.476 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:56.476 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:56.476 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:56.476 --rpcs-allowed comma-separated list of permitted RPCS 00:09:56.476 --json-ignore-init-errors don't exit on invalid config entry 00:09:56.476 00:09:56.476 Memory options: 00:09:56.476 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:56.476 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:56.476 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:56.476 -R, --huge-unlink unlink huge files after initialization 00:09:56.476 -n, --mem-channels number of memory channels used for DPDK 00:09:56.476 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:56.476 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:56.476 --no-huge run without using hugepages 00:09:56.476 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:56.476 -i, --shm-id shared memory ID (optional) 00:09:56.476 -g, --single-file-segments force creating just one hugetlbfs file 00:09:56.476 00:09:56.476 PCI options: 00:09:56.476 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:56.476 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:56.476 -u, --no-pci disable PCI access 00:09:56.476 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:56.476 00:09:56.476 Log options: 00:09:56.476 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:56.476 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:56.476 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:56.476 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:56.476 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:09:56.476 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:09:56.476 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:09:56.476 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:09:56.476 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:09:56.476 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:09:56.476 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:56.476 --silence-noticelog disable notice level logging to stderr 00:09:56.476 00:09:56.476 Trace options: 00:09:56.476 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:56.476 setting 0 to disable trace (default 32768) 00:09:56.476 Tracepoints vary in size and can use more than one trace entry. 00:09:56.476 -e, --tpoint-group [:] 00:09:56.476 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:56.476 [2024-11-04 17:10:57.092926] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:56.476 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:09:56.476 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:09:56.476 bdev_raid, scheduler, all). 00:09:56.476 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:56.476 a tracepoint group. First tpoint inside a group can be enabled by 00:09:56.476 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:56.476 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:56.476 in /include/spdk_internal/trace_defs.h 00:09:56.476 00:09:56.476 Other options: 00:09:56.476 -h, --help show this usage 00:09:56.476 -v, --version print SPDK version 00:09:56.476 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:56.476 --env-context Opaque context for use of the env implementation 00:09:56.476 00:09:56.476 Application specific: 00:09:56.476 [--------- DD Options ---------] 00:09:56.476 --if Input file. Must specify either --if or --ib. 00:09:56.476 --ib Input bdev. Must specifier either --if or --ib 00:09:56.476 --of Output file. Must specify either --of or --ob. 00:09:56.476 --ob Output bdev. Must specify either --of or --ob. 00:09:56.476 --iflag Input file flags. 00:09:56.476 --oflag Output file flags. 00:09:56.476 --bs I/O unit size (default: 4096) 00:09:56.476 --qd Queue depth (default: 2) 00:09:56.476 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:56.476 --skip Skip this many I/O units at start of input. (default: 0) 00:09:56.476 --seek Skip this many I/O units at start of output. (default: 0) 00:09:56.476 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:56.476 --sparse Enable hole skipping in input target 00:09:56.476 Available iflag and oflag values: 00:09:56.476 append - append mode 00:09:56.476 direct - use direct I/O for data 00:09:56.476 directory - fail unless a directory 00:09:56.476 dsync - use synchronized I/O for data 00:09:56.476 noatime - do not update access time 00:09:56.476 noctty - do not assign controlling terminal from file 00:09:56.476 nofollow - do not follow symlinks 00:09:56.477 nonblock - use non-blocking I/O 00:09:56.477 sync - use synchronized I/O for data and metadata 00:09:56.477 ************************************ 00:09:56.477 END TEST dd_invalid_arguments 00:09:56.477 ************************************ 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.477 00:09:56.477 real 0m0.083s 00:09:56.477 user 0m0.052s 00:09:56.477 sys 0m0.030s 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.477 ************************************ 00:09:56.477 START TEST dd_double_input 00:09:56.477 ************************************ 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:56.477 [2024-11-04 17:10:57.222997] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
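The negative tests in this section all follow the same shape: spdk_dd is launched with an invalid flag combination through the harness's NOT helper and must exit non-zero. A rough, hypothetical stand-in for that pattern is sketched below; expect_failure() is illustrative only, not the real NOT function from test/common/autotest_common.sh, and the paths are shortened relative to the trace.

```bash
#!/usr/bin/env bash
# Illustrative-only sketch of the "expected to fail" pattern used by the
# dd_double_input test above; expect_failure() is a hypothetical stand-in,
# not SPDK's NOT helper.

expect_failure() {
    if "$@"; then
        echo "ERROR: expected failure but command succeeded: $*" >&2
        return 1
    fi
    echo "OK: command failed as expected: $*"
}

# Supplying both an input file (--if) and an input bdev (--ib) must be
# rejected, as in the trace: "You may specify either --if or --ib, but not both."
expect_failure ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob=
```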
00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.477 00:09:56.477 real 0m0.078s 00:09:56.477 user 0m0.051s 00:09:56.477 sys 0m0.026s 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.477 17:10:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:56.477 ************************************ 00:09:56.477 END TEST dd_double_input 00:09:56.477 ************************************ 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.735 ************************************ 00:09:56.735 START TEST dd_double_output 00:09:56.735 ************************************ 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:56.735 [2024-11-04 17:10:57.357523] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.735 00:09:56.735 real 0m0.083s 00:09:56.735 user 0m0.046s 00:09:56.735 sys 0m0.036s 00:09:56.735 ************************************ 00:09:56.735 END TEST dd_double_output 00:09:56.735 ************************************ 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.735 ************************************ 00:09:56.735 START TEST dd_no_input 00:09:56.735 ************************************ 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:56.735 [2024-11-04 17:10:57.491509] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:56.735 ************************************ 00:09:56.735 END TEST dd_no_input 00:09:56.735 ************************************ 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.735 00:09:56.735 real 0m0.083s 00:09:56.735 user 0m0.048s 00:09:56.735 sys 0m0.034s 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.735 17:10:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.993 ************************************ 00:09:56.993 START TEST dd_no_output 00:09:56.993 ************************************ 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:56.993 [2024-11-04 17:10:57.618156] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:56.993 17:10:57 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.993 00:09:56.993 real 0m0.079s 00:09:56.993 user 0m0.055s 00:09:56.993 sys 0m0.024s 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.993 ************************************ 00:09:56.993 END TEST dd_no_output 00:09:56.993 ************************************ 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.993 ************************************ 00:09:56.993 START TEST dd_wrong_blocksize 00:09:56.993 ************************************ 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:56.993 [2024-11-04 17:10:57.751224] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:56.993 ************************************ 00:09:56.993 END TEST dd_wrong_blocksize 00:09:56.993 ************************************ 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.993 00:09:56.993 real 0m0.077s 00:09:56.993 user 0m0.044s 00:09:56.993 sys 0m0.031s 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.993 17:10:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:57.251 ************************************ 00:09:57.251 START TEST dd_smaller_blocksize 00:09:57.251 ************************************ 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.251 
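The two block-size tests here probe different failure points: --bs=0 is rejected during argument parsing, while the oversized value passed to the command recorded just above fails only later, when the copy buffer cannot be allocated (the errors that follow in the trace). A small sketch of that contrast, with shortened paths and a plain shell check in place of the harness's NOT wrapper:

```bash
#!/usr/bin/env bash
# Sketch contrasting the two --bs failure modes seen in the trace; paths are
# abbreviated and the checks below replace the harness's NOT helper.
DD=./build/bin/spdk_dd
SRC=test/dd/dd.dump0
DST=test/dd/dd.dump1

# Rejected up front: "Invalid --bs value".
if "$DD" --if="$SRC" --of="$DST" --bs=0; then
    echo "unexpectedly accepted --bs=0" >&2
fi

# Accepted by the parser but fails when allocating the copy buffer:
# "Cannot allocate memory - try smaller block size value".
if "$DD" --if="$SRC" --of="$DST" --bs=99999999999999; then
    echo "unexpectedly accepted an oversized --bs" >&2
fi
```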
17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:57.251 17:10:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:57.251 [2024-11-04 17:10:57.884608] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:09:57.251 [2024-11-04 17:10:57.884713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61746 ] 00:09:57.251 [2024-11-04 17:10:58.037532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.509 [2024-11-04 17:10:58.103138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.509 [2024-11-04 17:10:58.163152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:57.768 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:58.025 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:58.025 [2024-11-04 17:10:58.762325] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:58.025 [2024-11-04 17:10:58.762388] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.282 [2024-11-04 17:10:58.882257] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.282 00:09:58.282 real 0m1.118s 00:09:58.282 user 0m0.407s 00:09:58.282 sys 0m0.604s 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.282 ************************************ 00:09:58.282 END TEST dd_smaller_blocksize 00:09:58.282 ************************************ 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:58.282 ************************************ 00:09:58.282 START TEST dd_invalid_count 00:09:58.282 ************************************ 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:58.282 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.283 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.283 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.283 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.283 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.283 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.283 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.283 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:58.283 17:10:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:58.283 [2024-11-04 17:10:59.050839] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:58.283 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:09:58.283 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.283 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:58.283 ************************************ 00:09:58.283 END TEST dd_invalid_count 00:09:58.283 ************************************ 00:09:58.283 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.283 00:09:58.283 real 0m0.075s 00:09:58.283 user 0m0.045s 00:09:58.283 sys 0m0.030s 00:09:58.283 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.283 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:58.541 ************************************ 
00:09:58.541 START TEST dd_invalid_oflag 00:09:58.541 ************************************ 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:58.541 [2024-11-04 17:10:59.176227] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.541 00:09:58.541 real 0m0.075s 00:09:58.541 user 0m0.042s 00:09:58.541 sys 0m0.032s 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:58.541 ************************************ 00:09:58.541 END TEST dd_invalid_oflag 00:09:58.541 ************************************ 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:58.541 ************************************ 00:09:58.541 START TEST dd_invalid_iflag 00:09:58.541 
************************************ 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:58.541 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:58.541 [2024-11-04 17:10:59.322822] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.800 00:09:58.800 real 0m0.102s 00:09:58.800 user 0m0.067s 00:09:58.800 sys 0m0.034s 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.800 ************************************ 00:09:58.800 END TEST dd_invalid_iflag 00:09:58.800 ************************************ 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:58.800 ************************************ 00:09:58.800 START TEST dd_unknown_flag 00:09:58.800 ************************************ 00:09:58.800 
17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:58.800 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:58.800 [2024-11-04 17:10:59.464162] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:58.800 [2024-11-04 17:10:59.464450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61843 ] 00:09:59.059 [2024-11-04 17:10:59.613628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.059 [2024-11-04 17:10:59.664443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.059 [2024-11-04 17:10:59.718538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.059 [2024-11-04 17:10:59.752530] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:59.059 [2024-11-04 17:10:59.752611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.059 [2024-11-04 17:10:59.752665] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:59.059 [2024-11-04 17:10:59.752678] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.059 [2024-11-04 17:10:59.752940] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:59.059 [2024-11-04 17:10:59.752956] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.059 [2024-11-04 17:10:59.753011] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:59.059 [2024-11-04 17:10:59.753021] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:59.317 [2024-11-04 17:10:59.876937] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:59.317 00:09:59.317 real 0m0.549s 00:09:59.317 user 0m0.303s 00:09:59.317 sys 0m0.151s 00:09:59.317 ************************************ 00:09:59.317 END TEST dd_unknown_flag 00:09:59.317 ************************************ 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.317 17:10:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:59.317 ************************************ 00:09:59.317 START TEST dd_invalid_json 00:09:59.317 ************************************ 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:59.317 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:59.317 [2024-11-04 17:11:00.072161] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:09:59.317 [2024-11-04 17:11:00.072291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61872 ] 00:09:59.576 [2024-11-04 17:11:00.222625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.576 [2024-11-04 17:11:00.279529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.576 [2024-11-04 17:11:00.279603] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:59.576 [2024-11-04 17:11:00.279621] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:59.576 [2024-11-04 17:11:00.279632] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.576 [2024-11-04 17:11:00.279671] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:59.576 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:09:59.576 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.576 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:09:59.576 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:09:59.576 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:09:59.576 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:59.576 00:09:59.576 real 0m0.336s 00:09:59.576 user 0m0.169s 00:09:59.576 sys 0m0.066s 00:09:59.576 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.576 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:59.576 ************************************ 00:09:59.576 END TEST dd_invalid_json 00:09:59.576 ************************************ 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:59.901 ************************************ 00:09:59.901 START TEST dd_invalid_seek 00:09:59.901 ************************************ 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:59.901 
17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:59.901 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:59.901 { 00:09:59.901 "subsystems": [ 00:09:59.901 { 00:09:59.901 "subsystem": "bdev", 00:09:59.901 "config": [ 00:09:59.901 { 00:09:59.901 "params": { 00:09:59.901 "block_size": 512, 00:09:59.901 "num_blocks": 512, 00:09:59.901 "name": "malloc0" 00:09:59.901 }, 00:09:59.901 "method": "bdev_malloc_create" 00:09:59.901 }, 00:09:59.901 { 00:09:59.901 "params": { 00:09:59.901 "block_size": 512, 00:09:59.901 "num_blocks": 512, 00:09:59.901 "name": "malloc1" 00:09:59.901 }, 00:09:59.901 "method": "bdev_malloc_create" 00:09:59.901 }, 00:09:59.901 { 00:09:59.901 "method": "bdev_wait_for_examine" 00:09:59.901 } 00:09:59.901 ] 00:09:59.901 } 00:09:59.901 ] 00:09:59.901 } 00:09:59.901 [2024-11-04 17:11:00.457678] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
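(A brief aside, not part of the captured output: the dd_invalid_seek invocation above can be tried by hand outside the test harness by saving the two-malloc bdev config shown in this log to a file and running spdk_dd directly. The test itself streams the config through /dev/fd/62 via gen_conf; the temporary file name below is an assumption, everything else — the build path, the bdev parameters, and the --ib/--ob/--seek/--bs/--json flags — is taken verbatim from this log.)

#!/usr/bin/env bash
# Write the same bdev config the test generates: two malloc bdevs, 512 blocks of 512 bytes each.
cat > /tmp/dd_negative_malloc.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" }, "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" }, "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# --seek=513 points one block past the end of the 512-block malloc1 target, so the copy is
# expected to fail with "--seek value too big (513) - only 512 blocks available in output".
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 \
    --json /tmp/dd_negative_malloc.json
echo "spdk_dd exit code: $?"   # a non-zero exit code is the expected (passing) outcome here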
00:09:59.901 [2024-11-04 17:11:00.457771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61901 ] 00:09:59.901 [2024-11-04 17:11:00.607997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.901 [2024-11-04 17:11:00.664253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.161 [2024-11-04 17:11:00.720882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:00.161 [2024-11-04 17:11:00.781693] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:10:00.161 [2024-11-04 17:11:00.781753] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:00.161 [2024-11-04 17:11:00.900322] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:00.161 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:10:00.161 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:00.161 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:10:00.161 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:10:00.161 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:10:00.161 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:00.161 00:10:00.161 real 0m0.566s 00:10:00.161 user 0m0.356s 00:10:00.161 sys 0m0.166s 00:10:00.161 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:00.161 17:11:00 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:00.161 ************************************ 00:10:00.161 END TEST dd_invalid_seek 00:10:00.161 ************************************ 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 ************************************ 00:10:00.420 START TEST dd_invalid_skip 00:10:00.420 ************************************ 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:00.420 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:00.420 [2024-11-04 17:11:01.072007] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:10:00.421 [2024-11-04 17:11:01.072119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61935 ] 00:10:00.421 { 00:10:00.421 "subsystems": [ 00:10:00.421 { 00:10:00.421 "subsystem": "bdev", 00:10:00.421 "config": [ 00:10:00.421 { 00:10:00.421 "params": { 00:10:00.421 "block_size": 512, 00:10:00.421 "num_blocks": 512, 00:10:00.421 "name": "malloc0" 00:10:00.421 }, 00:10:00.421 "method": "bdev_malloc_create" 00:10:00.421 }, 00:10:00.421 { 00:10:00.421 "params": { 00:10:00.421 "block_size": 512, 00:10:00.421 "num_blocks": 512, 00:10:00.421 "name": "malloc1" 00:10:00.421 }, 00:10:00.421 "method": "bdev_malloc_create" 00:10:00.421 }, 00:10:00.421 { 00:10:00.421 "method": "bdev_wait_for_examine" 00:10:00.421 } 00:10:00.421 ] 00:10:00.421 } 00:10:00.421 ] 00:10:00.421 } 00:10:00.421 [2024-11-04 17:11:01.216085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.679 [2024-11-04 17:11:01.269712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.679 [2024-11-04 17:11:01.325970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:00.680 [2024-11-04 17:11:01.389723] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:10:00.680 [2024-11-04 17:11:01.389800] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:00.939 [2024-11-04 17:11:01.506446] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:00.939 00:10:00.939 real 0m0.552s 00:10:00.939 user 0m0.347s 00:10:00.939 sys 0m0.162s 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:00.939 ************************************ 00:10:00.939 END TEST dd_invalid_skip 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:00.939 ************************************ 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:00.939 ************************************ 00:10:00.939 START TEST dd_invalid_input_count 00:10:00.939 ************************************ 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:10:00.939 17:11:01 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:00.939 17:11:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:00.939 [2024-11-04 17:11:01.675513] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:10:00.939 [2024-11-04 17:11:01.675607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61973 ] 00:10:00.939 { 00:10:00.939 "subsystems": [ 00:10:00.939 { 00:10:00.939 "subsystem": "bdev", 00:10:00.939 "config": [ 00:10:00.939 { 00:10:00.939 "params": { 00:10:00.939 "block_size": 512, 00:10:00.939 "num_blocks": 512, 00:10:00.939 "name": "malloc0" 00:10:00.939 }, 00:10:00.939 "method": "bdev_malloc_create" 00:10:00.939 }, 00:10:00.939 { 00:10:00.939 "params": { 00:10:00.939 "block_size": 512, 00:10:00.939 "num_blocks": 512, 00:10:00.939 "name": "malloc1" 00:10:00.939 }, 00:10:00.939 "method": "bdev_malloc_create" 00:10:00.939 }, 00:10:00.939 { 00:10:00.939 "method": "bdev_wait_for_examine" 00:10:00.939 } 00:10:00.939 ] 00:10:00.939 } 00:10:00.939 ] 00:10:00.939 } 00:10:01.199 [2024-11-04 17:11:01.815552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.199 [2024-11-04 17:11:01.867435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.199 [2024-11-04 17:11:01.923406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.199 [2024-11-04 17:11:01.983905] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:10:01.199 [2024-11-04 17:11:01.983993] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:01.458 [2024-11-04 17:11:02.101002] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:01.458 00:10:01.458 real 0m0.533s 00:10:01.458 user 0m0.336s 00:10:01.458 sys 0m0.155s 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.458 ************************************ 00:10:01.458 END TEST dd_invalid_input_count 00:10:01.458 ************************************ 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:01.458 ************************************ 00:10:01.458 START TEST dd_invalid_output_count 00:10:01.458 ************************************ 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # 
invalid_output_count 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:01.458 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:01.459 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:01.717 { 00:10:01.717 "subsystems": [ 00:10:01.717 { 00:10:01.717 "subsystem": "bdev", 00:10:01.717 "config": [ 00:10:01.717 { 00:10:01.717 "params": { 00:10:01.718 "block_size": 512, 00:10:01.718 "num_blocks": 512, 00:10:01.718 "name": "malloc0" 00:10:01.718 }, 00:10:01.718 "method": "bdev_malloc_create" 00:10:01.718 }, 00:10:01.718 { 00:10:01.718 "method": "bdev_wait_for_examine" 00:10:01.718 } 00:10:01.718 ] 00:10:01.718 } 00:10:01.718 ] 00:10:01.718 } 00:10:01.718 [2024-11-04 17:11:02.275815] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 
initialization... 00:10:01.718 [2024-11-04 17:11:02.275928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62007 ] 00:10:01.718 [2024-11-04 17:11:02.422149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.718 [2024-11-04 17:11:02.474797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.976 [2024-11-04 17:11:02.529324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.976 [2024-11-04 17:11:02.583239] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:10:01.976 [2024-11-04 17:11:02.583350] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:01.976 [2024-11-04 17:11:02.713266] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:01.976 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:10:01.976 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:01.976 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:10:01.976 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:10:01.976 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:10:01.976 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:02.235 00:10:02.235 real 0m0.568s 00:10:02.235 user 0m0.354s 00:10:02.235 sys 0m0.170s 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:02.235 ************************************ 00:10:02.235 END TEST dd_invalid_output_count 00:10:02.235 ************************************ 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:02.235 ************************************ 00:10:02.235 START TEST dd_bs_not_multiple 00:10:02.235 ************************************ 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:02.235 17:11:02 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:02.235 17:11:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:02.235 [2024-11-04 17:11:02.893978] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
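(Likewise, a minimal sketch of the dd_bs_not_multiple case: it reuses the config file written in the earlier sketch — an assumption, the test again pipes its config through /dev/fd/62 — and the command line and expected error are the ones recorded in this log.)

# 513 is not a multiple of the 512-byte native block size of malloc0, so spdk_dd should refuse
# the copy with "--bs value must be a multiple of input native block size (512)".
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 \
    --json /tmp/dd_negative_malloc.json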
00:10:02.235 [2024-11-04 17:11:02.894113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:10:02.235 { 00:10:02.235 "subsystems": [ 00:10:02.235 { 00:10:02.235 "subsystem": "bdev", 00:10:02.235 "config": [ 00:10:02.235 { 00:10:02.235 "params": { 00:10:02.235 "block_size": 512, 00:10:02.235 "num_blocks": 512, 00:10:02.235 "name": "malloc0" 00:10:02.235 }, 00:10:02.235 "method": "bdev_malloc_create" 00:10:02.235 }, 00:10:02.235 { 00:10:02.235 "params": { 00:10:02.235 "block_size": 512, 00:10:02.235 "num_blocks": 512, 00:10:02.235 "name": "malloc1" 00:10:02.235 }, 00:10:02.235 "method": "bdev_malloc_create" 00:10:02.235 }, 00:10:02.235 { 00:10:02.235 "method": "bdev_wait_for_examine" 00:10:02.235 } 00:10:02.235 ] 00:10:02.235 } 00:10:02.235 ] 00:10:02.235 } 00:10:02.493 [2024-11-04 17:11:03.047103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.493 [2024-11-04 17:11:03.105616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.493 [2024-11-04 17:11:03.163233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.493 [2024-11-04 17:11:03.224553] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:10:02.493 [2024-11-04 17:11:03.224663] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:02.752 [2024-11-04 17:11:03.353342] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:02.752 00:10:02.752 real 0m0.589s 00:10:02.752 user 0m0.397s 00:10:02.752 sys 0m0.154s 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:02.752 ************************************ 00:10:02.752 END TEST dd_bs_not_multiple 00:10:02.752 ************************************ 00:10:02.752 00:10:02.752 real 0m6.649s 00:10:02.752 user 0m3.534s 00:10:02.752 sys 0m2.514s 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.752 17:11:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:02.752 ************************************ 00:10:02.752 END TEST spdk_dd_negative 00:10:02.752 ************************************ 00:10:02.752 00:10:02.752 real 1m18.081s 00:10:02.752 user 0m49.382s 00:10:02.752 sys 0m35.722s 00:10:02.752 17:11:03 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.752 17:11:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:02.752 
************************************ 00:10:02.752 END TEST spdk_dd 00:10:02.752 ************************************ 00:10:02.752 17:11:03 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:02.752 17:11:03 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:10:02.752 17:11:03 -- spdk/autotest.sh@256 -- # timing_exit lib 00:10:02.752 17:11:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.752 17:11:03 -- common/autotest_common.sh@10 -- # set +x 00:10:03.011 17:11:03 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:10:03.011 17:11:03 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:10:03.011 17:11:03 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:10:03.011 17:11:03 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:10:03.011 17:11:03 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:10:03.011 17:11:03 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:10:03.011 17:11:03 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:03.011 17:11:03 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:03.011 17:11:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.011 17:11:03 -- common/autotest_common.sh@10 -- # set +x 00:10:03.011 ************************************ 00:10:03.011 START TEST nvmf_tcp 00:10:03.011 ************************************ 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:03.011 * Looking for test storage... 00:10:03.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.011 17:11:03 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.011 --rc genhtml_branch_coverage=1 00:10:03.011 --rc genhtml_function_coverage=1 00:10:03.011 --rc genhtml_legend=1 00:10:03.011 --rc geninfo_all_blocks=1 00:10:03.011 --rc geninfo_unexecuted_blocks=1 00:10:03.011 00:10:03.011 ' 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.011 --rc genhtml_branch_coverage=1 00:10:03.011 --rc genhtml_function_coverage=1 00:10:03.011 --rc genhtml_legend=1 00:10:03.011 --rc geninfo_all_blocks=1 00:10:03.011 --rc geninfo_unexecuted_blocks=1 00:10:03.011 00:10:03.011 ' 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.011 --rc genhtml_branch_coverage=1 00:10:03.011 --rc genhtml_function_coverage=1 00:10:03.011 --rc genhtml_legend=1 00:10:03.011 --rc geninfo_all_blocks=1 00:10:03.011 --rc geninfo_unexecuted_blocks=1 00:10:03.011 00:10:03.011 ' 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.011 --rc genhtml_branch_coverage=1 00:10:03.011 --rc genhtml_function_coverage=1 00:10:03.011 --rc genhtml_legend=1 00:10:03.011 --rc geninfo_all_blocks=1 00:10:03.011 --rc geninfo_unexecuted_blocks=1 00:10:03.011 00:10:03.011 ' 00:10:03.011 17:11:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:03.011 17:11:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:03.011 17:11:03 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.011 17:11:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:03.011 ************************************ 00:10:03.011 START TEST nvmf_target_core 00:10:03.011 ************************************ 00:10:03.011 17:11:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:03.271 * Looking for test storage... 00:10:03.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:03.271 17:11:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.271 17:11:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.271 17:11:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.271 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.272 --rc genhtml_branch_coverage=1 00:10:03.272 --rc genhtml_function_coverage=1 00:10:03.272 --rc genhtml_legend=1 00:10:03.272 --rc geninfo_all_blocks=1 00:10:03.272 --rc geninfo_unexecuted_blocks=1 00:10:03.272 00:10:03.272 ' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.272 --rc genhtml_branch_coverage=1 00:10:03.272 --rc genhtml_function_coverage=1 00:10:03.272 --rc genhtml_legend=1 00:10:03.272 --rc geninfo_all_blocks=1 00:10:03.272 --rc geninfo_unexecuted_blocks=1 00:10:03.272 00:10:03.272 ' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.272 --rc genhtml_branch_coverage=1 00:10:03.272 --rc genhtml_function_coverage=1 00:10:03.272 --rc genhtml_legend=1 00:10:03.272 --rc geninfo_all_blocks=1 00:10:03.272 --rc geninfo_unexecuted_blocks=1 00:10:03.272 00:10:03.272 ' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.272 --rc genhtml_branch_coverage=1 00:10:03.272 --rc genhtml_function_coverage=1 00:10:03.272 --rc genhtml_legend=1 00:10:03.272 --rc geninfo_all_blocks=1 00:10:03.272 --rc geninfo_unexecuted_blocks=1 00:10:03.272 00:10:03.272 ' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.272 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.272 ************************************ 00:10:03.272 START TEST nvmf_host_management 00:10:03.272 ************************************ 00:10:03.272 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:03.532 * Looking for test storage... 
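A note on the "[: : integer expression expected" diagnostic emitted from nvmf/common.sh line 33 a few entries above: bash's [ ... -eq ... ] test needs integer operands on both sides, so when the left-hand variable expands to an empty string the test exits with status 2 and prints that message instead of simply evaluating false; the run continues because the result only drives an if. A minimal sketch of the failure mode and one defensive spelling, with SOME_FLAG as a hypothetical stand-in for whatever variable is empty here:

SOME_FLAG=""
if [ "$SOME_FLAG" -eq 1 ]; then          # reproduces "[: : integer expression expected"
    echo "flag set"
fi
if [ "${SOME_FLAG:-0}" -eq 1 ]; then     # default empty/unset to 0 so the test stays numeric
    echo "flag set"
fi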
00:10:03.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.532 --rc genhtml_branch_coverage=1 00:10:03.532 --rc genhtml_function_coverage=1 00:10:03.532 --rc genhtml_legend=1 00:10:03.532 --rc geninfo_all_blocks=1 00:10:03.532 --rc geninfo_unexecuted_blocks=1 00:10:03.532 00:10:03.532 ' 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.532 --rc genhtml_branch_coverage=1 00:10:03.532 --rc genhtml_function_coverage=1 00:10:03.532 --rc genhtml_legend=1 00:10:03.532 --rc geninfo_all_blocks=1 00:10:03.532 --rc geninfo_unexecuted_blocks=1 00:10:03.532 00:10:03.532 ' 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.532 --rc genhtml_branch_coverage=1 00:10:03.532 --rc genhtml_function_coverage=1 00:10:03.532 --rc genhtml_legend=1 00:10:03.532 --rc geninfo_all_blocks=1 00:10:03.532 --rc geninfo_unexecuted_blocks=1 00:10:03.532 00:10:03.532 ' 00:10:03.532 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.532 --rc genhtml_branch_coverage=1 00:10:03.532 --rc genhtml_function_coverage=1 00:10:03.532 --rc genhtml_legend=1 00:10:03.532 --rc geninfo_all_blocks=1 00:10:03.532 --rc geninfo_unexecuted_blocks=1 00:10:03.532 00:10:03.532 ' 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
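The lt 1.15 2 / cmp_versions walk-through that repeats at the top of each test scope above is an lcov version gate: it splits both version strings on ".-:", compares the components numerically, and here concludes 1.15 < 2, so the lcov 1.x spelling of the coverage options gets exported. A simplified paraphrase of the logic traced from scripts/common.sh (not the verbatim source, and without the non-numeric-component handling):

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]           # versions equal: only =, <=, >= succeed
}

lt 1.15 2 && echo "lcov older than 2.x: use the 1.x --rc option set"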
00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.533 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.533 17:11:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:03.533 Cannot find device "nvmf_init_br" 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:03.533 Cannot find device "nvmf_init_br2" 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:03.533 Cannot find device "nvmf_tgt_br" 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:10:03.533 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.792 Cannot find device "nvmf_tgt_br2" 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:03.792 Cannot find device "nvmf_init_br" 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:03.792 Cannot find device "nvmf_init_br2" 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:03.792 Cannot find device "nvmf_tgt_br" 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:03.792 Cannot find device "nvmf_tgt_br2" 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:03.792 Cannot find device "nvmf_br" 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:03.792 Cannot find device "nvmf_init_if" 00:10:03.792 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:03.793 Cannot find device "nvmf_init_if2" 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.793 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:04.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:10:04.052 00:10:04.052 --- 10.0.0.3 ping statistics --- 00:10:04.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.052 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:04.052 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:04.052 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:04.052 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:10:04.052 00:10:04.052 --- 10.0.0.4 ping statistics --- 00:10:04.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.053 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:04.053 00:10:04.053 --- 10.0.0.1 ping statistics --- 00:10:04.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.053 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:04.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:04.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:10:04.053 00:10:04.053 --- 10.0.0.2 ping statistics --- 00:10:04.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.053 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62381 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62381 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62381 ']' 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.053 17:11:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.312 [2024-11-04 17:11:04.875736] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
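Condensed, the nvmftestinit/nvmf_veth_init sequence traced above builds the following topology before the target app is launched: initiator veths nvmf_init_if/nvmf_init_if2 (10.0.0.1/24, 10.0.0.2/24) stay in the root namespace, target veths nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/24, 10.0.0.4/24) move into nvmf_tgt_ns_spdk, their peer ends are enslaved to the nvmf_br bridge, and iptables accepts TCP/4420 on the initiator interfaces. A recap of the first initiator/target pair, using the same commands as the trace (the *_if2 pair is wired identically):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root ns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the ns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3     # root ns -> target ns, as verified above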
00:10:04.312 [2024-11-04 17:11:04.876095] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.312 [2024-11-04 17:11:05.032604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.312 [2024-11-04 17:11:05.099485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.312 [2024-11-04 17:11:05.099778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.312 [2024-11-04 17:11:05.100013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.312 [2024-11-04 17:11:05.100243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.312 [2024-11-04 17:11:05.100357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.312 [2024-11-04 17:11:05.101695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.312 [2024-11-04 17:11:05.101770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.312 [2024-11-04 17:11:05.101975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.312 [2024-11-04 17:11:05.101982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.571 [2024-11-04 17:11:05.161567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.571 [2024-11-04 17:11:05.283082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
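The reactor lines above follow directly from the nvmf_tgt command line: -m 0x1E is the reactor core mask (0x1E = 0b11110, i.e. cores 1-4 with core 0 left free), which is why the app reports "Total cores available: 4" and starts one reactor per core, and -e 0xFFFF enables every tracepoint group ("Tracepoint Group Mask 0xFFFF"). A one-liner to decode the mask:

printf 'cores in 0x1E:'; for c in {0..7}; do (( (0x1E >> c) & 1 )) && printf ' %d' "$c"; done; echo
# -> cores in 0x1E: 1 2 3 4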
00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.571 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.571 Malloc0 00:10:04.571 [2024-11-04 17:11:05.361674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62427 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62427 /var/tmp/bdevperf.sock 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62427 ']' 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.829 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
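The rpc_cmd batch assembled into rpcs.txt is not echoed in the log; only its effects are (the Malloc0 bdev and the 10.0.0.3:4420 TCP listener reported above, plus the host entry the test later toggles with nvmf_subsystem_remove_host/add_host). A plausible equivalent using scripts/rpc.py spellings, given as an assumption rather than the actual batch contents (socket/namespace plumbing omitted):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the trace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0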
00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.830 { 00:10:04.830 "params": { 00:10:04.830 "name": "Nvme$subsystem", 00:10:04.830 "trtype": "$TEST_TRANSPORT", 00:10:04.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.830 "adrfam": "ipv4", 00:10:04.830 "trsvcid": "$NVMF_PORT", 00:10:04.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.830 "hdgst": ${hdgst:-false}, 00:10:04.830 "ddgst": ${ddgst:-false} 00:10:04.830 }, 00:10:04.830 "method": "bdev_nvme_attach_controller" 00:10:04.830 } 00:10:04.830 EOF 00:10:04.830 )") 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:04.830 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.830 "params": { 00:10:04.830 "name": "Nvme0", 00:10:04.830 "trtype": "tcp", 00:10:04.830 "traddr": "10.0.0.3", 00:10:04.830 "adrfam": "ipv4", 00:10:04.830 "trsvcid": "4420", 00:10:04.830 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:04.830 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:04.830 "hdgst": false, 00:10:04.830 "ddgst": false 00:10:04.830 }, 00:10:04.830 "method": "bdev_nvme_attach_controller" 00:10:04.830 }' 00:10:04.830 [2024-11-04 17:11:05.465325] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:10:04.830 [2024-11-04 17:11:05.465414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62427 ] 00:10:04.830 [2024-11-04 17:11:05.618493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.102 [2024-11-04 17:11:05.681286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.102 [2024-11-04 17:11:05.748528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.102 Running I/O for 10 seconds... 
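bdevperf above is pointed at --json /dev/fd/63, and the rendered fragment printed by gen_nvmf_target_json shows what goes into it: a bdev-subsystem config (same wrapper shape as the spdk_dd config at the top of this log) whose single bdev_nvme_attach_controller call creates controller Nvme0, hence the Nvme0n1 namespace bdev the I/O runs against, connected over TCP to 10.0.0.3:4420. A sketch of the assumed final shape of that config, written to a hypothetical scratch file instead of /dev/fd/63:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -q 64 -o 65536 -w verify -t 10 --json /tmp/nvme0.json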
00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.375 17:11:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.375 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:05.375 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:05.375 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:05.637 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.638 17:11:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=521 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 521 -ge 100 ']' 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.638 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.638 [2024-11-04 17:11:06.328876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.328921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.328961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.328972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.328983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.328992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 
[2024-11-04 17:11:06.329043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 
17:11:06.329290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 
17:11:06.329514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.638 [2024-11-04 17:11:06.329628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.638 [2024-11-04 17:11:06.329645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 
17:11:06.329729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 
17:11:06.329950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.329979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.329995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 
17:11:06.330156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.639 [2024-11-04 17:11:06.330375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.639 [2024-11-04 17:11:06.330386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c42d0 is same with the state(6) to be set 00:10:05.639 [2024-11-04 17:11:06.330548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:05.639 [2024-11-04 17:11:06.330565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:05.639 [2024-11-04 17:11:06.330576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:10:05.639 [2024-11-04 17:11:06.330586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:05.639 [2024-11-04 17:11:06.330596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:10:05.639 [2024-11-04 17:11:06.330605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:05.639 [2024-11-04 17:11:06.330615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:10:05.639 [2024-11-04 17:11:06.330625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:05.639 [2024-11-04 17:11:06.330634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c9ce0 is same with the state(6) to be set
00:10:05.639 [2024-11-04 17:11:06.331736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:10:05.639 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:05.639 task offset: 81920 on job bdev=Nvme0n1 fails
00:10:05.639
00:10:05.639 Latency(us)
00:10:05.639 [2024-11-04T17:11:06.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:05.639 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:05.640 Job: Nvme0n1 ended in about 0.46 seconds with error
00:10:05.640 Verification LBA range: start 0x0 length 0x400
00:10:05.640 Nvme0n1 : 0.46 1387.27 86.70 138.73 0.00 40313.41 2189.50 44326.17
00:10:05.640 [2024-11-04T17:11:06.444Z] ===================================================================================================================
00:10:05.640 [2024-11-04T17:11:06.444Z] Total : 1387.27 86.70 138.73 0.00 40313.41 2189.50 44326.17
00:10:05.640 17:11:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:10:05.640 [2024-11-04 17:11:06.334082] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:05.640 [2024-11-04 17:11:06.334113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c9ce0 (9): Bad file descriptor
00:10:05.640 [2024-11-04 17:11:06.343941] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
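The xtrace earlier in this section shows how host_management.sh's waitforio helper decided the bdevperf job had made progress before the host was removed: it polls bdev_get_iostat over the bdevperf RPC socket, pulls num_read_ops out of the JSON with jq, and stops once at least 100 reads have completed (67 on the first pass above, 521 on the second), giving up after ten attempts spaced 0.25 s apart. A minimal sketch of that polling loop, assuming scripts/rpc.py is reachable as rpc.py and the socket is the /var/tmp/bdevperf.sock seen in the trace (an illustration of the traced logic, not the exact upstream helper):

# Sketch of the waitforio-style polling traced above.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i reads
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat reports per-bdev counters; num_read_ops is cumulative completed reads.
        reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return "$ret"
}
# usage, as in the trace: waitforio /var/tmp/bdevperf.sock Nvme0n1

The 100-read threshold is only the script's signal that I/O is actually flowing; once it is reached, the test removes and re-adds the host NQN to exercise controller reset under load, which is what produces the SQ DELETION aborts and the reset messages above.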
00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62427 00:10:06.575 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62427) - No such process 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.575 { 00:10:06.575 "params": { 00:10:06.575 "name": "Nvme$subsystem", 00:10:06.575 "trtype": "$TEST_TRANSPORT", 00:10:06.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.575 "adrfam": "ipv4", 00:10:06.575 "trsvcid": "$NVMF_PORT", 00:10:06.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.575 "hdgst": ${hdgst:-false}, 00:10:06.575 "ddgst": ${ddgst:-false} 00:10:06.575 }, 00:10:06.575 "method": "bdev_nvme_attach_controller" 00:10:06.575 } 00:10:06.575 EOF 00:10:06.575 )") 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:06.575 17:11:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.575 "params": { 00:10:06.575 "name": "Nvme0", 00:10:06.575 "trtype": "tcp", 00:10:06.575 "traddr": "10.0.0.3", 00:10:06.575 "adrfam": "ipv4", 00:10:06.575 "trsvcid": "4420", 00:10:06.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:06.575 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:06.575 "hdgst": false, 00:10:06.575 "ddgst": false 00:10:06.575 }, 00:10:06.575 "method": "bdev_nvme_attach_controller" 00:10:06.575 }' 00:10:06.834 [2024-11-04 17:11:07.403564] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:10:06.834 [2024-11-04 17:11:07.403660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62462 ] 00:10:06.834 [2024-11-04 17:11:07.552996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.834 [2024-11-04 17:11:07.607085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.093 [2024-11-04 17:11:07.671578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.093 Running I/O for 1 seconds... 00:10:08.030 1472.00 IOPS, 92.00 MiB/s 00:10:08.030 Latency(us) 00:10:08.030 [2024-11-04T17:11:08.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.030 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:08.030 Verification LBA range: start 0x0 length 0x400 00:10:08.030 Nvme0n1 : 1.02 1507.50 94.22 0.00 0.00 41640.83 5213.09 38130.04 00:10:08.030 [2024-11-04T17:11:08.834Z] =================================================================================================================== 00:10:08.030 [2024-11-04T17:11:08.834Z] Total : 1507.50 94.22 0.00 0.00 41640.83 5213.09 38130.04 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.289 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.548 rmmod nvme_tcp 00:10:08.548 rmmod nvme_fabrics 00:10:08.548 rmmod nvme_keyring 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62381 ']' 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62381 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 62381 ']' 00:10:08.548 17:11:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 62381 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62381 00:10:08.548 killing process with pid 62381 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62381' 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 62381 00:10:08.548 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 62381 00:10:08.806 [2024-11-04 17:11:09.378151] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:08.806 17:11:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:08.806 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:09.064 ************************************ 00:10:09.064 END TEST nvmf_host_management 00:10:09.064 ************************************ 00:10:09.064 00:10:09.064 real 0m5.606s 00:10:09.064 user 0m19.650s 00:10:09.064 sys 0m1.565s 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.064 ************************************ 00:10:09.064 START TEST nvmf_lvol 00:10:09.064 ************************************ 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:09.064 * Looking for test storage... 
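Before nvmf_lvol starts, the nvmftestfini/killprocess sequence above tears down the nvmf target (pid 62381): the helper checks that the pid is still alive with kill -0, reads the process name with ps so it never signals a wrapping sudo (here it sees reactor_1, the SPDK reactor thread name), kills the process, and waits for it to exit. A rough sketch of that pattern as a standalone function, illustrating the traced logic rather than copying autotest_common.sh:

# Sketch of the killprocess pattern traced above; "$1" is the target pid.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                          # no pid given
    kill -0 "$pid" 2>/dev/null || return 0             # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 for an SPDK app
    fi
    [ "$process_name" = sudo ] && return 1             # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                    # wait only succeeds for our own children
}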
00:10:09.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:10:09.064 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:09.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.323 --rc genhtml_branch_coverage=1 00:10:09.323 --rc genhtml_function_coverage=1 00:10:09.323 --rc genhtml_legend=1 00:10:09.323 --rc geninfo_all_blocks=1 00:10:09.323 --rc geninfo_unexecuted_blocks=1 00:10:09.323 00:10:09.323 ' 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:09.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.323 --rc genhtml_branch_coverage=1 00:10:09.323 --rc genhtml_function_coverage=1 00:10:09.323 --rc genhtml_legend=1 00:10:09.323 --rc geninfo_all_blocks=1 00:10:09.323 --rc geninfo_unexecuted_blocks=1 00:10:09.323 00:10:09.323 ' 00:10:09.323 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:09.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.324 --rc genhtml_branch_coverage=1 00:10:09.324 --rc genhtml_function_coverage=1 00:10:09.324 --rc genhtml_legend=1 00:10:09.324 --rc geninfo_all_blocks=1 00:10:09.324 --rc geninfo_unexecuted_blocks=1 00:10:09.324 00:10:09.324 ' 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:09.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.324 --rc genhtml_branch_coverage=1 00:10:09.324 --rc genhtml_function_coverage=1 00:10:09.324 --rc genhtml_legend=1 00:10:09.324 --rc geninfo_all_blocks=1 00:10:09.324 --rc geninfo_unexecuted_blocks=1 00:10:09.324 00:10:09.324 ' 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.324 17:11:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.324 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:09.324 
17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:09.324 Cannot find device "nvmf_init_br" 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:09.324 Cannot find device "nvmf_init_br2" 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:09.324 Cannot find device "nvmf_tgt_br" 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:09.324 17:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.324 Cannot find device "nvmf_tgt_br2" 00:10:09.324 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:09.324 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:09.324 Cannot find device "nvmf_init_br" 00:10:09.324 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:09.324 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:09.324 Cannot find device "nvmf_init_br2" 00:10:09.324 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:09.325 Cannot find device "nvmf_tgt_br" 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:09.325 Cannot find device "nvmf_tgt_br2" 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:09.325 Cannot find device "nvmf_br" 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:09.325 Cannot find device "nvmf_init_if" 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:09.325 Cannot find device "nvmf_init_if2" 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:09.325 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:09.583 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:09.583 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:09.583 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:09.584 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:09.584 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:09.584 00:10:09.584 --- 10.0.0.3 ping statistics --- 00:10:09.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.584 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:09.584 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:09.584 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:10:09.584 00:10:09.584 --- 10.0.0.4 ping statistics --- 00:10:09.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.584 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:09.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:09.584 00:10:09.584 --- 10.0.0.1 ping statistics --- 00:10:09.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.584 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:09.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:09.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:10:09.584 00:10:09.584 --- 10.0.0.2 ping statistics --- 00:10:09.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.584 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:09.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62730 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62730 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 62730 ']' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:09.584 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:09.842 [2024-11-04 17:11:10.442524] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:10:09.842 [2024-11-04 17:11:10.442618] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.842 [2024-11-04 17:11:10.597677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.100 [2024-11-04 17:11:10.666563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.100 [2024-11-04 17:11:10.666620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.100 [2024-11-04 17:11:10.666634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.100 [2024-11-04 17:11:10.666645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.100 [2024-11-04 17:11:10.666654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.100 [2024-11-04 17:11:10.667910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.100 [2024-11-04 17:11:10.668056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.100 [2024-11-04 17:11:10.668062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.100 [2024-11-04 17:11:10.727391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:10.100 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:10.100 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:10:10.100 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.100 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:10.100 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:10.100 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.100 17:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:10.357 [2024-11-04 17:11:11.071190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.358 17:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.924 17:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:10.924 17:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.182 17:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:11.182 17:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:11.440 17:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:11.699 17:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b183afd7-38c0-4256-b120-3ee2a08705da 00:10:11.699 17:11:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b183afd7-38c0-4256-b120-3ee2a08705da lvol 20 00:10:11.957 17:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=295912c6-4899-4c2e-a601-bc65ad91af56 00:10:11.957 17:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:12.216 17:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 295912c6-4899-4c2e-a601-bc65ad91af56 00:10:12.475 17:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:12.734 [2024-11-04 17:11:13.413941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:12.734 17:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:12.993 17:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62798 00:10:12.993 17:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:12.993 17:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:13.930 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 295912c6-4899-4c2e-a601-bc65ad91af56 MY_SNAPSHOT 00:10:14.496 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b60edb94-5c56-4b38-a81a-532430254d8b 00:10:14.496 17:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 295912c6-4899-4c2e-a601-bc65ad91af56 30 00:10:14.755 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b60edb94-5c56-4b38-a81a-532430254d8b MY_CLONE 00:10:15.013 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0f11177c-f9ad-4d0e-a582-8d0530fdd3dd 00:10:15.013 17:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0f11177c-f9ad-4d0e-a582-8d0530fdd3dd 00:10:15.599 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62798 00:10:23.720 Initializing NVMe Controllers 00:10:23.720 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:23.720 Controller IO queue size 128, less than required. 00:10:23.720 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:23.720 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:23.720 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:23.720 Initialization complete. Launching workers. 
00:10:23.720 ======================================================== 00:10:23.720 Latency(us) 00:10:23.720 Device Information : IOPS MiB/s Average min max 00:10:23.720 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10898.90 42.57 11746.96 2312.66 52803.92 00:10:23.720 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10769.30 42.07 11887.17 1199.40 72808.67 00:10:23.720 ======================================================== 00:10:23.720 Total : 21668.20 84.64 11816.65 1199.40 72808.67 00:10:23.720 00:10:23.720 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:23.720 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 295912c6-4899-4c2e-a601-bc65ad91af56 00:10:23.979 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b183afd7-38c0-4256-b120-3ee2a08705da 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.238 rmmod nvme_tcp 00:10:24.238 rmmod nvme_fabrics 00:10:24.238 rmmod nvme_keyring 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62730 ']' 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62730 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 62730 ']' 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 62730 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:24.238 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62730 00:10:24.238 killing process with pid 62730 00:10:24.238 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:24.238 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:24.238 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 62730' 00:10:24.238 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 62730 00:10:24.238 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 62730 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:24.497 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.756 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:24.756 ************************************ 00:10:24.757 END TEST nvmf_lvol 00:10:24.757 ************************************ 00:10:24.757 00:10:24.757 real 0m15.792s 00:10:24.757 user 
1m5.031s 00:10:24.757 sys 0m4.073s 00:10:24.757 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:24.757 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.016 ************************************ 00:10:25.016 START TEST nvmf_lvs_grow 00:10:25.016 ************************************ 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:25.016 * Looking for test storage... 00:10:25.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:25.016 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:25.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.017 --rc genhtml_branch_coverage=1 00:10:25.017 --rc genhtml_function_coverage=1 00:10:25.017 --rc genhtml_legend=1 00:10:25.017 --rc geninfo_all_blocks=1 00:10:25.017 --rc geninfo_unexecuted_blocks=1 00:10:25.017 00:10:25.017 ' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:25.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.017 --rc genhtml_branch_coverage=1 00:10:25.017 --rc genhtml_function_coverage=1 00:10:25.017 --rc genhtml_legend=1 00:10:25.017 --rc geninfo_all_blocks=1 00:10:25.017 --rc geninfo_unexecuted_blocks=1 00:10:25.017 00:10:25.017 ' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:25.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.017 --rc genhtml_branch_coverage=1 00:10:25.017 --rc genhtml_function_coverage=1 00:10:25.017 --rc genhtml_legend=1 00:10:25.017 --rc geninfo_all_blocks=1 00:10:25.017 --rc geninfo_unexecuted_blocks=1 00:10:25.017 00:10:25.017 ' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:25.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.017 --rc genhtml_branch_coverage=1 00:10:25.017 --rc genhtml_function_coverage=1 00:10:25.017 --rc genhtml_legend=1 00:10:25.017 --rc geninfo_all_blocks=1 00:10:25.017 --rc geninfo_unexecuted_blocks=1 00:10:25.017 00:10:25.017 ' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:25.017 17:11:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.017 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
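The run above has just sourced nvmf/common.sh and set rpc_py and bdevperf_rpc_sock; the entries that follow rebuild the veth test network and then exercise lvs_grow_clean: a 200M file is created with truncate, wrapped in an aio bdev, an lvstore with 4M clusters is laid on top, the backing file is grown to 400M, and bdev_aio_rescan resizes the aio bdev underneath the store. A minimal sketch of that sequence, assuming a placeholder path for the backing file (the test keeps it under test/nvmf/target), is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/tmp/aio_bdev                                  # placeholder backing file
    truncate -s 200M "$aio_file"
    "$rpc" bdev_aio_create "$aio_file" aio_bdev 4096        # 4096-byte block size, as in the log
    lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 x 4M data clusters on a 200M file
    "$rpc" bdev_lvol_create -u "$lvs" lvol 150              # 150M lvol inside the store
    truncate -s 400M "$aio_file"                            # grow the backing file
    "$rpc" bdev_aio_rescan aio_bdev                         # aio bdev: 51200 -> 102400 blocks
    "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49 until the store itself is grown

The I/O side of the test runs in a separate bdevperf process and is driven through rpc.py -s /var/tmp/bdevperf.sock, which is why bdevperf_rpc_sock is set here.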
00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.017 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:25.018 Cannot find device "nvmf_init_br" 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:25.018 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:25.018 Cannot find device "nvmf_init_br2" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:25.277 Cannot find device "nvmf_tgt_br" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.277 Cannot find device "nvmf_tgt_br2" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:25.277 Cannot find device "nvmf_init_br" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:25.277 Cannot find device "nvmf_init_br2" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:25.277 Cannot find device "nvmf_tgt_br" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:25.277 Cannot find device "nvmf_tgt_br2" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:25.277 Cannot find device "nvmf_br" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:25.277 Cannot find device "nvmf_init_if" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:25.277 Cannot find device "nvmf_init_if2" 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:25.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:25.277 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.277 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
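At this point the veth topology for the second suite is assembled the same way as before: nvmf_init_if and nvmf_init_if2 stay in the root namespace with 10.0.0.1 and 10.0.0.2, nvmf_tgt_if and nvmf_tgt_if2 are moved into nvmf_tgt_ns_spdk with 10.0.0.3 and 10.0.0.4, and the four bridge-side peers are enslaved to nvmf_br. The ipts calls that follow open TCP port 4420 and tag every rule with an SPDK_NVMF comment; the matching iptr helper (seen in the earlier teardown) strips exactly those rules. A rough sketch of the tag-and-strip pattern, with the interface name taken from this log, is:

    # insert an ACCEPT rule carrying a recognizable comment
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop every rule tagged SPDK_NVMF in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore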
00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:25.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:25.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:25.536 00:10:25.536 --- 10.0.0.3 ping statistics --- 00:10:25.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.536 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:25.536 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:25.536 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:10:25.536 00:10:25.536 --- 10.0.0.4 ping statistics --- 00:10:25.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.536 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:25.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:25.536 00:10:25.536 --- 10.0.0.1 ping statistics --- 00:10:25.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.536 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:25.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:25.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:10:25.536 00:10:25.536 --- 10.0.0.2 ping statistics --- 00:10:25.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.536 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63180 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63180 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 63180 ']' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:25.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:25.536 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:25.536 [2024-11-04 17:11:26.236977] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:10:25.536 [2024-11-04 17:11:26.237088] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.795 [2024-11-04 17:11:26.389513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.795 [2024-11-04 17:11:26.449648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.796 [2024-11-04 17:11:26.449706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.796 [2024-11-04 17:11:26.449718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.796 [2024-11-04 17:11:26.449726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.796 [2024-11-04 17:11:26.449734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.796 [2024-11-04 17:11:26.450175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.796 [2024-11-04 17:11:26.507509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:26.732 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:26.732 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:10:26.732 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.732 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:26.732 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.733 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.733 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:26.991 [2024-11-04 17:11:27.591002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.991 ************************************ 00:10:26.991 START TEST lvs_grow_clean 00:10:26.991 ************************************ 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:26.991 17:11:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:26.991 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:27.250 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:27.250 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:27.509 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:27.509 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:27.509 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:27.768 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:27.768 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:27.768 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 lvol 150 00:10:28.027 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe 00:10:28.027 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:28.027 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:28.286 [2024-11-04 17:11:28.992158] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:28.286 [2024-11-04 17:11:28.992317] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:28.286 true 00:10:28.286 17:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:28.286 17:11:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:28.562 17:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:28.562 17:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:28.870 17:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe 00:10:29.128 17:11:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:29.386 [2024-11-04 17:11:30.032806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:29.386 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63269 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63269 /var/tmp/bdevperf.sock 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 63269 ']' 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.645 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:29.645 [2024-11-04 17:11:30.349742] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
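[Editor's note] For readability, the target-side setup that the clean-run log above walks through reduces to the RPC sequence below. This is a condensed sketch, not part of the test output: paths, sizes and flags are copied from the log, while the $lvs/$lvol variables stand in for the UUIDs the script captures at runtime (bd1d3ee2-... and 043f07d6-... in this run).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    # 200M file-backed AIO bdev carrying an lvstore with 4MiB clusters (49 data clusters)
    rm -f "$aio" && truncate -s 200M "$aio"
    "$rpc" bdev_aio_create "$aio" aio_bdev 4096
    lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)        # 150M lvol

    # Grow the backing file; the rescan resizes the bdev, but the lvstore still reports 49 clusters
    truncate -s 400M "$aio"
    "$rpc" bdev_aio_rescan aio_bdev

    # Export the lvol over NVMe/TCP on 10.0.0.3:4420
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420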
00:10:29.645 [2024-11-04 17:11:30.349864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:10:29.903 [2024-11-04 17:11:30.491513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.903 [2024-11-04 17:11:30.544543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.903 [2024-11-04 17:11:30.602743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:29.903 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:29.903 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:10:29.903 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:30.470 Nvme0n1 00:10:30.470 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:30.729 [ 00:10:30.729 { 00:10:30.729 "name": "Nvme0n1", 00:10:30.729 "aliases": [ 00:10:30.729 "043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe" 00:10:30.729 ], 00:10:30.729 "product_name": "NVMe disk", 00:10:30.729 "block_size": 4096, 00:10:30.729 "num_blocks": 38912, 00:10:30.729 "uuid": "043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe", 00:10:30.729 "numa_id": -1, 00:10:30.729 "assigned_rate_limits": { 00:10:30.729 "rw_ios_per_sec": 0, 00:10:30.729 "rw_mbytes_per_sec": 0, 00:10:30.729 "r_mbytes_per_sec": 0, 00:10:30.729 "w_mbytes_per_sec": 0 00:10:30.729 }, 00:10:30.729 "claimed": false, 00:10:30.729 "zoned": false, 00:10:30.729 "supported_io_types": { 00:10:30.729 "read": true, 00:10:30.729 "write": true, 00:10:30.729 "unmap": true, 00:10:30.729 "flush": true, 00:10:30.729 "reset": true, 00:10:30.729 "nvme_admin": true, 00:10:30.729 "nvme_io": true, 00:10:30.729 "nvme_io_md": false, 00:10:30.729 "write_zeroes": true, 00:10:30.729 "zcopy": false, 00:10:30.729 "get_zone_info": false, 00:10:30.729 "zone_management": false, 00:10:30.729 "zone_append": false, 00:10:30.729 "compare": true, 00:10:30.729 "compare_and_write": true, 00:10:30.729 "abort": true, 00:10:30.729 "seek_hole": false, 00:10:30.729 "seek_data": false, 00:10:30.729 "copy": true, 00:10:30.729 "nvme_iov_md": false 00:10:30.729 }, 00:10:30.729 "memory_domains": [ 00:10:30.729 { 00:10:30.729 "dma_device_id": "system", 00:10:30.729 "dma_device_type": 1 00:10:30.729 } 00:10:30.729 ], 00:10:30.729 "driver_specific": { 00:10:30.729 "nvme": [ 00:10:30.729 { 00:10:30.729 "trid": { 00:10:30.729 "trtype": "TCP", 00:10:30.729 "adrfam": "IPv4", 00:10:30.729 "traddr": "10.0.0.3", 00:10:30.729 "trsvcid": "4420", 00:10:30.729 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:30.729 }, 00:10:30.729 "ctrlr_data": { 00:10:30.729 "cntlid": 1, 00:10:30.729 "vendor_id": "0x8086", 00:10:30.729 "model_number": "SPDK bdev Controller", 00:10:30.729 "serial_number": "SPDK0", 00:10:30.729 "firmware_revision": "25.01", 00:10:30.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:30.729 "oacs": { 00:10:30.729 "security": 0, 00:10:30.729 "format": 0, 00:10:30.729 "firmware": 0, 
00:10:30.729 "ns_manage": 0 00:10:30.729 }, 00:10:30.729 "multi_ctrlr": true, 00:10:30.729 "ana_reporting": false 00:10:30.729 }, 00:10:30.729 "vs": { 00:10:30.729 "nvme_version": "1.3" 00:10:30.729 }, 00:10:30.729 "ns_data": { 00:10:30.729 "id": 1, 00:10:30.729 "can_share": true 00:10:30.729 } 00:10:30.729 } 00:10:30.729 ], 00:10:30.729 "mp_policy": "active_passive" 00:10:30.730 } 00:10:30.730 } 00:10:30.730 ] 00:10:30.730 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:30.730 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63285 00:10:30.730 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:30.730 Running I/O for 10 seconds... 00:10:31.665 Latency(us) 00:10:31.665 [2024-11-04T17:11:32.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.665 Nvme0n1 : 1.00 6750.00 26.37 0.00 0.00 0.00 0.00 0.00 00:10:31.665 [2024-11-04T17:11:32.469Z] =================================================================================================================== 00:10:31.665 [2024-11-04T17:11:32.469Z] Total : 6750.00 26.37 0.00 0.00 0.00 0.00 0.00 00:10:31.665 00:10:32.602 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:32.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.860 Nvme0n1 : 2.00 6677.00 26.08 0.00 0.00 0.00 0.00 0.00 00:10:32.860 [2024-11-04T17:11:33.664Z] =================================================================================================================== 00:10:32.860 [2024-11-04T17:11:33.664Z] Total : 6677.00 26.08 0.00 0.00 0.00 0.00 0.00 00:10:32.860 00:10:32.860 true 00:10:33.119 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:33.119 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:33.378 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:33.378 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:33.378 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63285 00:10:33.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.637 Nvme0n1 : 3.00 6652.67 25.99 0.00 0.00 0.00 0.00 0.00 00:10:33.637 [2024-11-04T17:11:34.441Z] =================================================================================================================== 00:10:33.637 [2024-11-04T17:11:34.441Z] Total : 6652.67 25.99 0.00 0.00 0.00 0.00 0.00 00:10:33.637 00:10:35.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.013 Nvme0n1 : 4.00 6672.25 26.06 0.00 0.00 0.00 0.00 0.00 00:10:35.013 [2024-11-04T17:11:35.817Z] 
=================================================================================================================== 00:10:35.013 [2024-11-04T17:11:35.817Z] Total : 6672.25 26.06 0.00 0.00 0.00 0.00 0.00 00:10:35.013 00:10:35.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.948 Nvme0n1 : 5.00 6684.00 26.11 0.00 0.00 0.00 0.00 0.00 00:10:35.948 [2024-11-04T17:11:36.752Z] =================================================================================================================== 00:10:35.948 [2024-11-04T17:11:36.752Z] Total : 6684.00 26.11 0.00 0.00 0.00 0.00 0.00 00:10:35.948 00:10:36.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.884 Nvme0n1 : 6.00 6627.50 25.89 0.00 0.00 0.00 0.00 0.00 00:10:36.884 [2024-11-04T17:11:37.688Z] =================================================================================================================== 00:10:36.884 [2024-11-04T17:11:37.688Z] Total : 6627.50 25.89 0.00 0.00 0.00 0.00 0.00 00:10:36.884 00:10:37.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.820 Nvme0n1 : 7.00 6624.14 25.88 0.00 0.00 0.00 0.00 0.00 00:10:37.820 [2024-11-04T17:11:38.624Z] =================================================================================================================== 00:10:37.820 [2024-11-04T17:11:38.624Z] Total : 6624.14 25.88 0.00 0.00 0.00 0.00 0.00 00:10:37.820 00:10:38.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.754 Nvme0n1 : 8.00 6621.62 25.87 0.00 0.00 0.00 0.00 0.00 00:10:38.754 [2024-11-04T17:11:39.558Z] =================================================================================================================== 00:10:38.754 [2024-11-04T17:11:39.558Z] Total : 6621.62 25.87 0.00 0.00 0.00 0.00 0.00 00:10:38.754 00:10:39.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.697 Nvme0n1 : 9.00 6605.56 25.80 0.00 0.00 0.00 0.00 0.00 00:10:39.697 [2024-11-04T17:11:40.501Z] =================================================================================================================== 00:10:39.697 [2024-11-04T17:11:40.501Z] Total : 6605.56 25.80 0.00 0.00 0.00 0.00 0.00 00:10:39.697 00:10:40.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.633 Nvme0n1 : 10.00 6592.70 25.75 0.00 0.00 0.00 0.00 0.00 00:10:40.633 [2024-11-04T17:11:41.437Z] =================================================================================================================== 00:10:40.633 [2024-11-04T17:11:41.437Z] Total : 6592.70 25.75 0.00 0.00 0.00 0.00 0.00 00:10:40.633 00:10:40.633 00:10:40.633 Latency(us) 00:10:40.633 [2024-11-04T17:11:41.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.633 Nvme0n1 : 10.00 6590.22 25.74 0.00 0.00 19415.52 4408.79 109147.23 00:10:40.633 [2024-11-04T17:11:41.437Z] =================================================================================================================== 00:10:40.633 [2024-11-04T17:11:41.437Z] Total : 6590.22 25.74 0.00 0.00 19415.52 4408.79 109147.23 00:10:40.633 { 00:10:40.633 "results": [ 00:10:40.633 { 00:10:40.633 "job": "Nvme0n1", 00:10:40.633 "core_mask": "0x2", 00:10:40.633 "workload": "randwrite", 00:10:40.633 "status": "finished", 00:10:40.633 "queue_depth": 128, 00:10:40.633 "io_size": 4096, 00:10:40.633 "runtime": 
10.003922, 00:10:40.633 "iops": 6590.215317552456, 00:10:40.633 "mibps": 25.743028584189283, 00:10:40.633 "io_failed": 0, 00:10:40.633 "io_timeout": 0, 00:10:40.633 "avg_latency_us": 19415.51759782021, 00:10:40.633 "min_latency_us": 4408.785454545455, 00:10:40.633 "max_latency_us": 109147.2290909091 00:10:40.633 } 00:10:40.633 ], 00:10:40.633 "core_count": 1 00:10:40.633 } 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63269 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 63269 ']' 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 63269 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63269 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:40.892 killing process with pid 63269 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63269' 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 63269 00:10:40.892 Received shutdown signal, test time was about 10.000000 seconds 00:10:40.892 00:10:40.892 Latency(us) 00:10:40.892 [2024-11-04T17:11:41.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.892 [2024-11-04T17:11:41.696Z] =================================================================================================================== 00:10:40.892 [2024-11-04T17:11:41.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 63269 00:10:40.892 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:41.151 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:41.719 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:41.719 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:41.719 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:41.719 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:41.719 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:41.981 [2024-11-04 17:11:42.739830] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:41.981 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:41.981 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:41.981 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:41.982 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:42.561 request: 00:10:42.561 { 00:10:42.561 "uuid": "bd1d3ee2-3a89-4950-a57f-704b06e1d170", 00:10:42.561 "method": "bdev_lvol_get_lvstores", 00:10:42.561 "req_id": 1 00:10:42.561 } 00:10:42.561 Got JSON-RPC error response 00:10:42.561 response: 00:10:42.561 { 00:10:42.561 "code": -19, 00:10:42.561 "message": "No such device" 00:10:42.561 } 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:42.561 aio_bdev 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.561 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:42.820 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe -t 2000 00:10:43.079 [ 00:10:43.079 { 00:10:43.079 "name": "043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe", 00:10:43.079 "aliases": [ 00:10:43.079 "lvs/lvol" 00:10:43.079 ], 00:10:43.079 "product_name": "Logical Volume", 00:10:43.079 "block_size": 4096, 00:10:43.079 "num_blocks": 38912, 00:10:43.079 "uuid": "043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe", 00:10:43.079 "assigned_rate_limits": { 00:10:43.079 "rw_ios_per_sec": 0, 00:10:43.079 "rw_mbytes_per_sec": 0, 00:10:43.079 "r_mbytes_per_sec": 0, 00:10:43.079 "w_mbytes_per_sec": 0 00:10:43.079 }, 00:10:43.079 "claimed": false, 00:10:43.079 "zoned": false, 00:10:43.079 "supported_io_types": { 00:10:43.079 "read": true, 00:10:43.079 "write": true, 00:10:43.079 "unmap": true, 00:10:43.079 "flush": false, 00:10:43.079 "reset": true, 00:10:43.079 "nvme_admin": false, 00:10:43.079 "nvme_io": false, 00:10:43.079 "nvme_io_md": false, 00:10:43.079 "write_zeroes": true, 00:10:43.079 "zcopy": false, 00:10:43.079 "get_zone_info": false, 00:10:43.079 "zone_management": false, 00:10:43.079 "zone_append": false, 00:10:43.079 "compare": false, 00:10:43.079 "compare_and_write": false, 00:10:43.079 "abort": false, 00:10:43.079 "seek_hole": true, 00:10:43.079 "seek_data": true, 00:10:43.079 "copy": false, 00:10:43.079 "nvme_iov_md": false 00:10:43.079 }, 00:10:43.079 "driver_specific": { 00:10:43.079 "lvol": { 00:10:43.079 "lvol_store_uuid": "bd1d3ee2-3a89-4950-a57f-704b06e1d170", 00:10:43.079 "base_bdev": "aio_bdev", 00:10:43.079 "thin_provision": false, 00:10:43.079 "num_allocated_clusters": 38, 00:10:43.079 "snapshot": false, 00:10:43.079 "clone": false, 00:10:43.079 "esnap_clone": false 00:10:43.079 } 00:10:43.079 } 00:10:43.079 } 00:10:43.079 ] 00:10:43.079 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:10:43.079 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:43.080 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:43.648 17:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:43.648 17:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:43.648 17:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:43.648 17:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:43.648 17:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 043f07d6-2fdb-49cd-96ba-3fbda0f9d3fe 00:10:43.906 17:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bd1d3ee2-3a89-4950-a57f-704b06e1d170 00:10:44.165 17:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:44.424 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:44.993 00:10:44.993 real 0m17.936s 00:10:44.993 user 0m16.891s 00:10:44.993 sys 0m2.462s 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:44.993 ************************************ 00:10:44.993 END TEST lvs_grow_clean 00:10:44.993 ************************************ 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:44.993 ************************************ 00:10:44.993 START TEST lvs_grow_dirty 00:10:44.993 ************************************ 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:44.993 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:45.252 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:45.252 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:45.511 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=62bad3f8-8377-485e-8cae-697654c5e09c 00:10:45.511 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:10:45.511 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:45.770 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:45.770 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:45.770 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62bad3f8-8377-485e-8cae-697654c5e09c lvol 150 00:10:46.029 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6d0a168e-6471-4a6b-a666-96195b9a163f 00:10:46.029 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:46.029 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:46.289 [2024-11-04 17:11:46.947706] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:46.289 [2024-11-04 17:11:46.947800] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:46.289 true 00:10:46.289 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:10:46.289 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:46.548 17:11:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:46.548 17:11:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:46.808 17:11:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d0a168e-6471-4a6b-a666-96195b9a163f 00:10:47.067 17:11:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:47.351 [2024-11-04 17:11:48.016310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:47.351 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:47.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63531 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63531 /var/tmp/bdevperf.sock 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63531 ']' 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:47.628 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:47.628 [2024-11-04 17:11:48.385690] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
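[Editor's note] The initiator side is the same in both variants: a bdevperf app is started idle (-z, wait for the perform_tests RPC), attaches to the exported namespace over TCP, and runs a 10-second 4KiB randwrite workload while the lvstore is grown underneath it. A sketch assembled from the commands visible in the log, reusing the variables from the earlier sketch; the 2-second delay matches the sleep 2 in the script.

    spdk=/home/vagrant/spdk_repo/spdk

    # Start bdevperf idle on its own RPC socket
    "$spdk"/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the target's namespace as bdev Nvme0n1
    "$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the workload, then grow the lvstore on the target while I/O is in flight
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 2
    "$spdk"/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # total_data_clusters: 49 -> 99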
00:10:47.628 [2024-11-04 17:11:48.386115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63531 ] 00:10:47.888 [2024-11-04 17:11:48.527189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.888 [2024-11-04 17:11:48.587817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.888 [2024-11-04 17:11:48.644746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.824 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:48.824 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:10:48.824 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:49.083 Nvme0n1 00:10:49.083 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:49.342 [ 00:10:49.342 { 00:10:49.342 "name": "Nvme0n1", 00:10:49.342 "aliases": [ 00:10:49.342 "6d0a168e-6471-4a6b-a666-96195b9a163f" 00:10:49.342 ], 00:10:49.342 "product_name": "NVMe disk", 00:10:49.342 "block_size": 4096, 00:10:49.342 "num_blocks": 38912, 00:10:49.342 "uuid": "6d0a168e-6471-4a6b-a666-96195b9a163f", 00:10:49.342 "numa_id": -1, 00:10:49.342 "assigned_rate_limits": { 00:10:49.342 "rw_ios_per_sec": 0, 00:10:49.342 "rw_mbytes_per_sec": 0, 00:10:49.342 "r_mbytes_per_sec": 0, 00:10:49.342 "w_mbytes_per_sec": 0 00:10:49.342 }, 00:10:49.342 "claimed": false, 00:10:49.342 "zoned": false, 00:10:49.342 "supported_io_types": { 00:10:49.342 "read": true, 00:10:49.342 "write": true, 00:10:49.342 "unmap": true, 00:10:49.342 "flush": true, 00:10:49.342 "reset": true, 00:10:49.342 "nvme_admin": true, 00:10:49.342 "nvme_io": true, 00:10:49.342 "nvme_io_md": false, 00:10:49.342 "write_zeroes": true, 00:10:49.342 "zcopy": false, 00:10:49.342 "get_zone_info": false, 00:10:49.342 "zone_management": false, 00:10:49.342 "zone_append": false, 00:10:49.342 "compare": true, 00:10:49.342 "compare_and_write": true, 00:10:49.342 "abort": true, 00:10:49.342 "seek_hole": false, 00:10:49.342 "seek_data": false, 00:10:49.342 "copy": true, 00:10:49.342 "nvme_iov_md": false 00:10:49.342 }, 00:10:49.342 "memory_domains": [ 00:10:49.342 { 00:10:49.342 "dma_device_id": "system", 00:10:49.342 "dma_device_type": 1 00:10:49.342 } 00:10:49.342 ], 00:10:49.342 "driver_specific": { 00:10:49.342 "nvme": [ 00:10:49.342 { 00:10:49.342 "trid": { 00:10:49.342 "trtype": "TCP", 00:10:49.342 "adrfam": "IPv4", 00:10:49.342 "traddr": "10.0.0.3", 00:10:49.342 "trsvcid": "4420", 00:10:49.342 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:49.342 }, 00:10:49.342 "ctrlr_data": { 00:10:49.342 "cntlid": 1, 00:10:49.342 "vendor_id": "0x8086", 00:10:49.342 "model_number": "SPDK bdev Controller", 00:10:49.342 "serial_number": "SPDK0", 00:10:49.342 "firmware_revision": "25.01", 00:10:49.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:49.342 "oacs": { 00:10:49.342 "security": 0, 00:10:49.342 "format": 0, 00:10:49.342 "firmware": 0, 
00:10:49.342 "ns_manage": 0 00:10:49.342 }, 00:10:49.342 "multi_ctrlr": true, 00:10:49.342 "ana_reporting": false 00:10:49.342 }, 00:10:49.342 "vs": { 00:10:49.342 "nvme_version": "1.3" 00:10:49.342 }, 00:10:49.342 "ns_data": { 00:10:49.342 "id": 1, 00:10:49.343 "can_share": true 00:10:49.343 } 00:10:49.343 } 00:10:49.343 ], 00:10:49.343 "mp_policy": "active_passive" 00:10:49.343 } 00:10:49.343 } 00:10:49.343 ] 00:10:49.343 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63560 00:10:49.343 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:49.343 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:49.343 Running I/O for 10 seconds... 00:10:50.721 Latency(us) 00:10:50.721 [2024-11-04T17:11:51.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.721 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:50.721 [2024-11-04T17:11:51.525Z] =================================================================================================================== 00:10:50.721 [2024-11-04T17:11:51.525Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:10:50.721 00:10:51.288 17:11:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:10:51.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.288 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:10:51.288 [2024-11-04T17:11:52.092Z] =================================================================================================================== 00:10:51.288 [2024-11-04T17:11:52.092Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:10:51.288 00:10:51.546 true 00:10:51.546 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:10:51.547 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:51.805 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:51.805 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:51.805 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63560 00:10:52.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.373 Nvme0n1 : 3.00 6942.67 27.12 0.00 0.00 0.00 0.00 0.00 00:10:52.373 [2024-11-04T17:11:53.177Z] =================================================================================================================== 00:10:52.373 [2024-11-04T17:11:53.177Z] Total : 6942.67 27.12 0.00 0.00 0.00 0.00 0.00 00:10:52.373 00:10:53.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.325 Nvme0n1 : 4.00 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:10:53.325 [2024-11-04T17:11:54.129Z] 
=================================================================================================================== 00:10:53.325 [2024-11-04T17:11:54.129Z] Total : 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:10:53.325 00:10:54.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.712 Nvme0n1 : 5.00 6832.60 26.69 0.00 0.00 0.00 0.00 0.00 00:10:54.712 [2024-11-04T17:11:55.516Z] =================================================================================================================== 00:10:54.712 [2024-11-04T17:11:55.516Z] Total : 6832.60 26.69 0.00 0.00 0.00 0.00 0.00 00:10:54.712 00:10:55.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.281 Nvme0n1 : 6.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:10:55.281 [2024-11-04T17:11:56.085Z] =================================================================================================================== 00:10:55.281 [2024-11-04T17:11:56.085Z] Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:10:55.281 00:10:56.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.658 Nvme0n1 : 7.00 6606.71 25.81 0.00 0.00 0.00 0.00 0.00 00:10:56.658 [2024-11-04T17:11:57.462Z] =================================================================================================================== 00:10:56.658 [2024-11-04T17:11:57.462Z] Total : 6606.71 25.81 0.00 0.00 0.00 0.00 0.00 00:10:56.658 00:10:57.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.595 Nvme0n1 : 8.00 6590.50 25.74 0.00 0.00 0.00 0.00 0.00 00:10:57.595 [2024-11-04T17:11:58.399Z] =================================================================================================================== 00:10:57.595 [2024-11-04T17:11:58.399Z] Total : 6590.50 25.74 0.00 0.00 0.00 0.00 0.00 00:10:57.595 00:10:58.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.531 Nvme0n1 : 9.00 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:10:58.531 [2024-11-04T17:11:59.335Z] =================================================================================================================== 00:10:58.531 [2024-11-04T17:11:59.335Z] Total : 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:10:58.531 00:10:59.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.470 Nvme0n1 : 10.00 6580.50 25.71 0.00 0.00 0.00 0.00 0.00 00:10:59.470 [2024-11-04T17:12:00.274Z] =================================================================================================================== 00:10:59.470 [2024-11-04T17:12:00.274Z] Total : 6580.50 25.71 0.00 0.00 0.00 0.00 0.00 00:10:59.470 00:10:59.470 00:10:59.470 Latency(us) 00:10:59.470 [2024-11-04T17:12:00.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.470 Nvme0n1 : 10.02 6579.50 25.70 0.00 0.00 19449.43 8996.31 232593.22 00:10:59.470 [2024-11-04T17:12:00.274Z] =================================================================================================================== 00:10:59.470 [2024-11-04T17:12:00.274Z] Total : 6579.50 25.70 0.00 0.00 19449.43 8996.31 232593.22 00:10:59.470 { 00:10:59.470 "results": [ 00:10:59.470 { 00:10:59.470 "job": "Nvme0n1", 00:10:59.470 "core_mask": "0x2", 00:10:59.470 "workload": "randwrite", 00:10:59.470 "status": "finished", 00:10:59.470 "queue_depth": 128, 00:10:59.470 "io_size": 4096, 00:10:59.470 "runtime": 
10.020971, 00:10:59.470 "iops": 6579.502126091374, 00:10:59.470 "mibps": 25.701180180044428, 00:10:59.470 "io_failed": 0, 00:10:59.470 "io_timeout": 0, 00:10:59.470 "avg_latency_us": 19449.432229025886, 00:10:59.470 "min_latency_us": 8996.305454545454, 00:10:59.470 "max_latency_us": 232593.22181818183 00:10:59.470 } 00:10:59.470 ], 00:10:59.470 "core_count": 1 00:10:59.470 } 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63531 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 63531 ']' 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 63531 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63531 00:10:59.470 killing process with pid 63531 00:10:59.470 Received shutdown signal, test time was about 10.000000 seconds 00:10:59.470 00:10:59.470 Latency(us) 00:10:59.470 [2024-11-04T17:12:00.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.470 [2024-11-04T17:12:00.274Z] =================================================================================================================== 00:10:59.470 [2024-11-04T17:12:00.274Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63531' 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 63531 00:10:59.470 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 63531 00:10:59.730 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:59.989 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:00.248 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:00.248 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:00.506 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:00.506 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:00.506 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63180 
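[Editor's note] This is the point where lvs_grow_dirty diverges from the clean variant: after checking free_clusters, the test SIGKILLs the nvmf target so the grown lvstore is never cleanly unloaded, then restarts it and re-registers the AIO bdev, which forces blobstore recovery on load (the bs_recover notices a little further down). A rough sketch built from the commands in the log, continuing the variables from the sketches above; the pids 63180 and 63693 are specific to this run.

    # Dirty path: kill the target hard and bring it back up on the same backing file
    kill -9 "$nvmfpid"        # 63180 in this run; the lvstore is left "dirty"
    ip netns exec nvmf_tgt_ns_spdk "$spdk"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!                # 63693 in this run

    # Re-creating the AIO bdev reloads the lvstore and triggers blobstore recovery
    "$rpc" bdev_aio_create "$aio" aio_bdev 4096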
00:11:00.506 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63180 00:11:00.506 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63180 Killed "${NVMF_APP[@]}" "$@" 00:11:00.506 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:00.506 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63693 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63693 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63693 ']' 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:00.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:00.507 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:00.507 [2024-11-04 17:12:01.269779] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:11:00.507 [2024-11-04 17:12:01.269892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.766 [2024-11-04 17:12:01.421627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.766 [2024-11-04 17:12:01.481875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.766 [2024-11-04 17:12:01.481959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.766 [2024-11-04 17:12:01.481986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.766 [2024-11-04 17:12:01.481994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.766 [2024-11-04 17:12:01.482001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
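[Editor's note] Once the restarted target has recovered the blobstore, the test only needs to confirm that the post-grow geometry survived the crash and then tear everything down. Condensed from the RPCs that follow in the log; the UUIDs are the ones created earlier in this run.

    lvs=62bad3f8-8377-485e-8cae-697654c5e09c      # lvstore UUID from this run
    lvol=6d0a168e-6471-4a6b-a666-96195b9a163f     # lvol UUID from this run

    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b "$lvol" -t 2000                                      # lvol is back after recovery
    "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'          # expect 61
    "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # expect 99

    # Teardown
    "$rpc" bdev_lvol_delete "$lvol"
    "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    "$rpc" bdev_aio_delete aio_bdev
    rm -f "$aio"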
00:11:00.766 [2024-11-04 17:12:01.482401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.766 [2024-11-04 17:12:01.539718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.703 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:01.703 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:01.703 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.703 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.703 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.703 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.703 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:01.967 [2024-11-04 17:12:02.538389] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:01.967 [2024-11-04 17:12:02.539379] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:01.967 [2024-11-04 17:12:02.539698] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:01.967 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:01.967 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6d0a168e-6471-4a6b-a666-96195b9a163f 00:11:01.967 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6d0a168e-6471-4a6b-a666-96195b9a163f 00:11:01.967 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:01.967 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:01.967 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:01.967 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:01.967 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:02.226 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d0a168e-6471-4a6b-a666-96195b9a163f -t 2000 00:11:02.484 [ 00:11:02.484 { 00:11:02.484 "name": "6d0a168e-6471-4a6b-a666-96195b9a163f", 00:11:02.484 "aliases": [ 00:11:02.484 "lvs/lvol" 00:11:02.484 ], 00:11:02.484 "product_name": "Logical Volume", 00:11:02.484 "block_size": 4096, 00:11:02.484 "num_blocks": 38912, 00:11:02.484 "uuid": "6d0a168e-6471-4a6b-a666-96195b9a163f", 00:11:02.484 "assigned_rate_limits": { 00:11:02.484 "rw_ios_per_sec": 0, 00:11:02.484 "rw_mbytes_per_sec": 0, 00:11:02.484 "r_mbytes_per_sec": 0, 00:11:02.484 "w_mbytes_per_sec": 0 00:11:02.484 }, 00:11:02.484 
"claimed": false, 00:11:02.484 "zoned": false, 00:11:02.484 "supported_io_types": { 00:11:02.484 "read": true, 00:11:02.484 "write": true, 00:11:02.484 "unmap": true, 00:11:02.484 "flush": false, 00:11:02.484 "reset": true, 00:11:02.484 "nvme_admin": false, 00:11:02.484 "nvme_io": false, 00:11:02.484 "nvme_io_md": false, 00:11:02.484 "write_zeroes": true, 00:11:02.484 "zcopy": false, 00:11:02.484 "get_zone_info": false, 00:11:02.484 "zone_management": false, 00:11:02.484 "zone_append": false, 00:11:02.484 "compare": false, 00:11:02.484 "compare_and_write": false, 00:11:02.484 "abort": false, 00:11:02.484 "seek_hole": true, 00:11:02.484 "seek_data": true, 00:11:02.484 "copy": false, 00:11:02.484 "nvme_iov_md": false 00:11:02.484 }, 00:11:02.484 "driver_specific": { 00:11:02.484 "lvol": { 00:11:02.484 "lvol_store_uuid": "62bad3f8-8377-485e-8cae-697654c5e09c", 00:11:02.484 "base_bdev": "aio_bdev", 00:11:02.484 "thin_provision": false, 00:11:02.484 "num_allocated_clusters": 38, 00:11:02.484 "snapshot": false, 00:11:02.484 "clone": false, 00:11:02.484 "esnap_clone": false 00:11:02.484 } 00:11:02.484 } 00:11:02.484 } 00:11:02.484 ] 00:11:02.484 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:02.484 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:02.484 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:02.742 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:02.742 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:02.742 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:03.019 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:03.019 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:03.305 [2024-11-04 17:12:03.888140] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.305 17:12:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:03.305 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:03.564 request: 00:11:03.564 { 00:11:03.564 "uuid": "62bad3f8-8377-485e-8cae-697654c5e09c", 00:11:03.564 "method": "bdev_lvol_get_lvstores", 00:11:03.564 "req_id": 1 00:11:03.564 } 00:11:03.564 Got JSON-RPC error response 00:11:03.564 response: 00:11:03.564 { 00:11:03.564 "code": -19, 00:11:03.564 "message": "No such device" 00:11:03.564 } 00:11:03.564 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:03.564 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:03.564 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:03.564 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:03.564 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:03.823 aio_bdev 00:11:03.823 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6d0a168e-6471-4a6b-a666-96195b9a163f 00:11:03.823 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6d0a168e-6471-4a6b-a666-96195b9a163f 00:11:03.823 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:03.823 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:03.823 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:03.823 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:03.823 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:04.082 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d0a168e-6471-4a6b-a666-96195b9a163f -t 2000 00:11:04.340 [ 00:11:04.340 { 
00:11:04.340 "name": "6d0a168e-6471-4a6b-a666-96195b9a163f", 00:11:04.341 "aliases": [ 00:11:04.341 "lvs/lvol" 00:11:04.341 ], 00:11:04.341 "product_name": "Logical Volume", 00:11:04.341 "block_size": 4096, 00:11:04.341 "num_blocks": 38912, 00:11:04.341 "uuid": "6d0a168e-6471-4a6b-a666-96195b9a163f", 00:11:04.341 "assigned_rate_limits": { 00:11:04.341 "rw_ios_per_sec": 0, 00:11:04.341 "rw_mbytes_per_sec": 0, 00:11:04.341 "r_mbytes_per_sec": 0, 00:11:04.341 "w_mbytes_per_sec": 0 00:11:04.341 }, 00:11:04.341 "claimed": false, 00:11:04.341 "zoned": false, 00:11:04.341 "supported_io_types": { 00:11:04.341 "read": true, 00:11:04.341 "write": true, 00:11:04.341 "unmap": true, 00:11:04.341 "flush": false, 00:11:04.341 "reset": true, 00:11:04.341 "nvme_admin": false, 00:11:04.341 "nvme_io": false, 00:11:04.341 "nvme_io_md": false, 00:11:04.341 "write_zeroes": true, 00:11:04.341 "zcopy": false, 00:11:04.341 "get_zone_info": false, 00:11:04.341 "zone_management": false, 00:11:04.341 "zone_append": false, 00:11:04.341 "compare": false, 00:11:04.341 "compare_and_write": false, 00:11:04.341 "abort": false, 00:11:04.341 "seek_hole": true, 00:11:04.341 "seek_data": true, 00:11:04.341 "copy": false, 00:11:04.341 "nvme_iov_md": false 00:11:04.341 }, 00:11:04.341 "driver_specific": { 00:11:04.341 "lvol": { 00:11:04.341 "lvol_store_uuid": "62bad3f8-8377-485e-8cae-697654c5e09c", 00:11:04.341 "base_bdev": "aio_bdev", 00:11:04.341 "thin_provision": false, 00:11:04.341 "num_allocated_clusters": 38, 00:11:04.341 "snapshot": false, 00:11:04.341 "clone": false, 00:11:04.341 "esnap_clone": false 00:11:04.341 } 00:11:04.341 } 00:11:04.341 } 00:11:04.341 ] 00:11:04.341 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:04.341 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:04.341 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:04.600 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:04.600 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:04.600 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:04.859 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:04.859 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6d0a168e-6471-4a6b-a666-96195b9a163f 00:11:05.118 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62bad3f8-8377-485e-8cae-697654c5e09c 00:11:05.376 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:05.648 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:06.216 ************************************ 00:11:06.216 END TEST lvs_grow_dirty 00:11:06.216 ************************************ 00:11:06.216 00:11:06.216 real 0m21.138s 00:11:06.216 user 0m42.429s 00:11:06.216 sys 0m9.219s 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:06.216 nvmf_trace.0 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.216 17:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:06.475 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.476 rmmod nvme_tcp 00:11:06.476 rmmod nvme_fabrics 00:11:06.476 rmmod nvme_keyring 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63693 ']' 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63693 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 63693 ']' 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 63693 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:11:06.476 17:12:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63693 00:11:06.476 killing process with pid 63693 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63693' 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 63693 00:11:06.476 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 63693 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.734 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.993 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:06.993 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:06.994 00:11:06.994 real 0m42.001s 00:11:06.994 user 1m6.228s 00:11:06.994 sys 0m12.648s 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:06.994 ************************************ 00:11:06.994 END TEST nvmf_lvs_grow 00:11:06.994 ************************************ 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.994 ************************************ 00:11:06.994 START TEST nvmf_bdev_io_wait 00:11:06.994 ************************************ 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:06.994 * Looking for test storage... 
00:11:06.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:06.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.994 --rc genhtml_branch_coverage=1 00:11:06.994 --rc genhtml_function_coverage=1 00:11:06.994 --rc genhtml_legend=1 00:11:06.994 --rc geninfo_all_blocks=1 00:11:06.994 --rc geninfo_unexecuted_blocks=1 00:11:06.994 00:11:06.994 ' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:06.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.994 --rc genhtml_branch_coverage=1 00:11:06.994 --rc genhtml_function_coverage=1 00:11:06.994 --rc genhtml_legend=1 00:11:06.994 --rc geninfo_all_blocks=1 00:11:06.994 --rc geninfo_unexecuted_blocks=1 00:11:06.994 00:11:06.994 ' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:06.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.994 --rc genhtml_branch_coverage=1 00:11:06.994 --rc genhtml_function_coverage=1 00:11:06.994 --rc genhtml_legend=1 00:11:06.994 --rc geninfo_all_blocks=1 00:11:06.994 --rc geninfo_unexecuted_blocks=1 00:11:06.994 00:11:06.994 ' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:06.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.994 --rc genhtml_branch_coverage=1 00:11:06.994 --rc genhtml_function_coverage=1 00:11:06.994 --rc genhtml_legend=1 00:11:06.994 --rc geninfo_all_blocks=1 00:11:06.994 --rc geninfo_unexecuted_blocks=1 00:11:06.994 00:11:06.994 ' 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.994 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.254 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:07.254 
17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:07.254 Cannot find device "nvmf_init_br" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:07.254 Cannot find device "nvmf_init_br2" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:07.254 Cannot find device "nvmf_tgt_br" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:07.254 Cannot find device "nvmf_tgt_br2" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:07.254 Cannot find device "nvmf_init_br" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:07.254 Cannot find device "nvmf_init_br2" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:07.254 Cannot find device "nvmf_tgt_br" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:07.254 Cannot find device "nvmf_tgt_br2" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:07.254 Cannot find device "nvmf_br" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:07.254 Cannot find device "nvmf_init_if" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:07.254 Cannot find device "nvmf_init_if2" 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:07.254 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.255 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:07.255 
17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.255 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:07.255 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:07.255 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:07.255 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:07.255 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:07.255 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:07.255 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:07.255 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:07.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:07.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:11:07.589 00:11:07.589 --- 10.0.0.3 ping statistics --- 00:11:07.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.589 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:07.589 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:07.589 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:11:07.589 00:11:07.589 --- 10.0.0.4 ping statistics --- 00:11:07.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.589 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:07.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:11:07.589 00:11:07.589 --- 10.0.0.1 ping statistics --- 00:11:07.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.589 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:07.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:11:07.589 00:11:07.589 --- 10.0.0.2 ping statistics --- 00:11:07.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.589 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64065 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64065 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 64065 ']' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:07.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:07.589 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:07.589 [2024-11-04 17:12:08.246163] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:11:07.589 [2024-11-04 17:12:08.246274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.589 [2024-11-04 17:12:08.390650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.849 [2024-11-04 17:12:08.449374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.849 [2024-11-04 17:12:08.449471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.849 [2024-11-04 17:12:08.449485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.849 [2024-11-04 17:12:08.449493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.849 [2024-11-04 17:12:08.449500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.849 [2024-11-04 17:12:08.450633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.849 [2024-11-04 17:12:08.451720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.849 [2024-11-04 17:12:08.451929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.849 [2024-11-04 17:12:08.451940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:07.849 [2024-11-04 17:12:08.631000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.849 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:07.849 [2024-11-04 17:12:08.647649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 Malloc0 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:08.108 [2024-11-04 17:12:08.700655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64087 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64089 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:08.108 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64091 00:11:08.109 17:12:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:08.109 { 00:11:08.109 "params": { 00:11:08.109 "name": "Nvme$subsystem", 00:11:08.109 "trtype": "$TEST_TRANSPORT", 00:11:08.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:08.109 "adrfam": "ipv4", 00:11:08.109 "trsvcid": "$NVMF_PORT", 00:11:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:08.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:08.109 "hdgst": ${hdgst:-false}, 00:11:08.109 "ddgst": ${ddgst:-false} 00:11:08.109 }, 00:11:08.109 "method": "bdev_nvme_attach_controller" 00:11:08.109 } 00:11:08.109 EOF 00:11:08.109 )") 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64093 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:08.109 { 00:11:08.109 "params": { 00:11:08.109 "name": "Nvme$subsystem", 00:11:08.109 "trtype": "$TEST_TRANSPORT", 00:11:08.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:08.109 "adrfam": "ipv4", 00:11:08.109 "trsvcid": "$NVMF_PORT", 00:11:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:08.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:08.109 "hdgst": ${hdgst:-false}, 00:11:08.109 "ddgst": ${ddgst:-false} 00:11:08.109 }, 00:11:08.109 "method": "bdev_nvme_attach_controller" 00:11:08.109 } 00:11:08.109 EOF 00:11:08.109 )") 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:11:08.109 { 00:11:08.109 "params": { 00:11:08.109 "name": "Nvme$subsystem", 00:11:08.109 "trtype": "$TEST_TRANSPORT", 00:11:08.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:08.109 "adrfam": "ipv4", 00:11:08.109 "trsvcid": "$NVMF_PORT", 00:11:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:08.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:08.109 "hdgst": ${hdgst:-false}, 00:11:08.109 "ddgst": ${ddgst:-false} 00:11:08.109 }, 00:11:08.109 "method": "bdev_nvme_attach_controller" 00:11:08.109 } 00:11:08.109 EOF 00:11:08.109 )") 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:08.109 { 00:11:08.109 "params": { 00:11:08.109 "name": "Nvme$subsystem", 00:11:08.109 "trtype": "$TEST_TRANSPORT", 00:11:08.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:08.109 "adrfam": "ipv4", 00:11:08.109 "trsvcid": "$NVMF_PORT", 00:11:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:08.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:08.109 "hdgst": ${hdgst:-false}, 00:11:08.109 "ddgst": ${ddgst:-false} 00:11:08.109 }, 00:11:08.109 "method": "bdev_nvme_attach_controller" 00:11:08.109 } 00:11:08.109 EOF 00:11:08.109 )") 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:08.109 "params": { 00:11:08.109 "name": "Nvme1", 00:11:08.109 "trtype": "tcp", 00:11:08.109 "traddr": "10.0.0.3", 00:11:08.109 "adrfam": "ipv4", 00:11:08.109 "trsvcid": "4420", 00:11:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:08.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:08.109 "hdgst": false, 00:11:08.109 "ddgst": false 00:11:08.109 }, 00:11:08.109 "method": "bdev_nvme_attach_controller" 00:11:08.109 }' 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:08.109 "params": { 00:11:08.109 "name": "Nvme1", 00:11:08.109 "trtype": "tcp", 00:11:08.109 "traddr": "10.0.0.3", 00:11:08.109 "adrfam": "ipv4", 00:11:08.109 "trsvcid": "4420", 00:11:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:08.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:08.109 "hdgst": false, 00:11:08.109 "ddgst": false 00:11:08.109 }, 00:11:08.109 "method": "bdev_nvme_attach_controller" 00:11:08.109 }' 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:08.109 "params": { 00:11:08.109 "name": "Nvme1", 00:11:08.109 "trtype": "tcp", 00:11:08.109 "traddr": "10.0.0.3", 00:11:08.109 "adrfam": "ipv4", 00:11:08.109 "trsvcid": "4420", 00:11:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:08.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:08.109 "hdgst": false, 00:11:08.109 "ddgst": false 00:11:08.109 }, 00:11:08.109 "method": "bdev_nvme_attach_controller" 00:11:08.109 }' 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:08.109 "params": { 00:11:08.109 "name": "Nvme1", 00:11:08.109 "trtype": "tcp", 00:11:08.109 "traddr": "10.0.0.3", 00:11:08.109 "adrfam": "ipv4", 00:11:08.109 "trsvcid": "4420", 00:11:08.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:08.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:08.109 "hdgst": false, 00:11:08.109 "ddgst": false 00:11:08.109 }, 00:11:08.109 "method": "bdev_nvme_attach_controller" 00:11:08.109 }' 00:11:08.109 [2024-11-04 17:12:08.766243] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:11:08.109 [2024-11-04 17:12:08.766334] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:08.109 [2024-11-04 17:12:08.782595] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:11:08.109 [2024-11-04 17:12:08.782677] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:08.109 [2024-11-04 17:12:08.786163] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:11:08.109 [2024-11-04 17:12:08.786393] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:08.109 17:12:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64087 00:11:08.109 [2024-11-04 17:12:08.795957] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:11:08.109 [2024-11-04 17:12:08.796035] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:08.369 [2024-11-04 17:12:08.989547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.369 [2024-11-04 17:12:09.044797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:08.369 [2024-11-04 17:12:09.058805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.369 [2024-11-04 17:12:09.066669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.369 [2024-11-04 17:12:09.122515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:08.369 [2024-11-04 17:12:09.136412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.369 [2024-11-04 17:12:09.141806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.628 [2024-11-04 17:12:09.197792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:08.628 [2024-11-04 17:12:09.211806] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.628 Running I/O for 1 seconds... 00:11:08.628 [2024-11-04 17:12:09.220314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.628 Running I/O for 1 seconds... 00:11:08.628 [2024-11-04 17:12:09.275847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:08.628 [2024-11-04 17:12:09.289680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.628 Running I/O for 1 seconds... 00:11:08.628 Running I/O for 1 seconds... 
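[Annotation, not part of the captured output: the interleaved trace above is target/bdev_io_wait.sh launching four bdevperf instances in parallel, one per workload, each on its own core and DPDK shared-memory id, each fed the generated JSON on /dev/fd/63. The read, flush, and unmap command lines are copied from the trace; the write instance's "-m 0x10 -i 1" is inferred from its EAL parameters (-c 0x10, --file-prefix=spdk1) and its result header, and gen_nvmf_target_json is the harness helper traced above.]

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait    # the script waits on each PID individually (64087/64089/64091/64093 in this run)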
00:11:09.564 6808.00 IOPS, 26.59 MiB/s 00:11:09.564 Latency(us) 00:11:09.564 [2024-11-04T17:12:10.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.564 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:09.564 Nvme1n1 : 1.02 6790.88 26.53 0.00 0.00 18582.89 7477.06 33125.47 00:11:09.564 [2024-11-04T17:12:10.368Z] =================================================================================================================== 00:11:09.564 [2024-11-04T17:12:10.368Z] Total : 6790.88 26.53 0.00 0.00 18582.89 7477.06 33125.47 00:11:09.564 7432.00 IOPS, 29.03 MiB/s 00:11:09.564 Latency(us) 00:11:09.564 [2024-11-04T17:12:10.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.564 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:09.564 Nvme1n1 : 1.01 7467.79 29.17 0.00 0.00 17025.34 8757.99 25380.31 00:11:09.564 [2024-11-04T17:12:10.368Z] =================================================================================================================== 00:11:09.564 [2024-11-04T17:12:10.368Z] Total : 7467.79 29.17 0.00 0.00 17025.34 8757.99 25380.31 00:11:09.564 174368.00 IOPS, 681.12 MiB/s 00:11:09.564 Latency(us) 00:11:09.564 [2024-11-04T17:12:10.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.564 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:09.564 Nvme1n1 : 1.00 173987.49 679.64 0.00 0.00 731.71 389.12 2189.50 00:11:09.564 [2024-11-04T17:12:10.368Z] =================================================================================================================== 00:11:09.564 [2024-11-04T17:12:10.368Z] Total : 173987.49 679.64 0.00 0.00 731.71 389.12 2189.50 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64089 00:11:09.823 7184.00 IOPS, 28.06 MiB/s [2024-11-04T17:12:10.627Z] 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64091 00:11:09.823 00:11:09.823 Latency(us) 00:11:09.823 [2024-11-04T17:12:10.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.823 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:09.823 Nvme1n1 : 1.01 7313.43 28.57 0.00 0.00 17445.11 5332.25 45279.42 00:11:09.823 [2024-11-04T17:12:10.627Z] =================================================================================================================== 00:11:09.823 [2024-11-04T17:12:10.627Z] Total : 7313.43 28.57 0.00 0.00 17445.11 5332.25 45279.42 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64093 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.823 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.823 rmmod nvme_tcp 00:11:10.082 rmmod nvme_fabrics 00:11:10.082 rmmod nvme_keyring 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64065 ']' 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64065 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 64065 ']' 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 64065 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64065 00:11:10.082 killing process with pid 64065 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64065' 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 64065 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 64065 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.082 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:10.341 17:12:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:10.341 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.341 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:10.341 00:11:10.341 real 0m3.509s 00:11:10.341 user 0m14.192s 00:11:10.341 sys 0m2.217s 00:11:10.341 ************************************ 00:11:10.341 END TEST nvmf_bdev_io_wait 00:11:10.341 ************************************ 00:11:10.342 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.342 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:10.601 17:12:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:10.601 17:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:10.601 17:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:10.602 ************************************ 00:11:10.602 START TEST nvmf_queue_depth 00:11:10.602 ************************************ 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:10.602 * Looking for test 
storage... 00:11:10.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:10.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.602 --rc genhtml_branch_coverage=1 00:11:10.602 --rc genhtml_function_coverage=1 00:11:10.602 --rc genhtml_legend=1 00:11:10.602 --rc geninfo_all_blocks=1 00:11:10.602 --rc geninfo_unexecuted_blocks=1 00:11:10.602 00:11:10.602 ' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:10.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.602 --rc genhtml_branch_coverage=1 00:11:10.602 --rc genhtml_function_coverage=1 00:11:10.602 --rc genhtml_legend=1 00:11:10.602 --rc geninfo_all_blocks=1 00:11:10.602 --rc geninfo_unexecuted_blocks=1 00:11:10.602 00:11:10.602 ' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:10.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.602 --rc genhtml_branch_coverage=1 00:11:10.602 --rc genhtml_function_coverage=1 00:11:10.602 --rc genhtml_legend=1 00:11:10.602 --rc geninfo_all_blocks=1 00:11:10.602 --rc geninfo_unexecuted_blocks=1 00:11:10.602 00:11:10.602 ' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:10.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.602 --rc genhtml_branch_coverage=1 00:11:10.602 --rc genhtml_function_coverage=1 00:11:10.602 --rc genhtml_legend=1 00:11:10.602 --rc geninfo_all_blocks=1 00:11:10.602 --rc geninfo_unexecuted_blocks=1 00:11:10.602 00:11:10.602 ' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.602 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.602 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:10.603 
17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:10.603 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:10.862 17:12:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:10.862 Cannot find device "nvmf_init_br" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:10.862 Cannot find device "nvmf_init_br2" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:10.862 Cannot find device "nvmf_tgt_br" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.862 Cannot find device "nvmf_tgt_br2" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:10.862 Cannot find device "nvmf_init_br" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:10.862 Cannot find device "nvmf_init_br2" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:10.862 Cannot find device "nvmf_tgt_br" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:10.862 Cannot find device "nvmf_tgt_br2" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:10.862 Cannot find device "nvmf_br" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:10.862 Cannot find device "nvmf_init_if" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:10.862 Cannot find device "nvmf_init_if2" 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.862 17:12:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:10.862 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.121 
17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:11.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:11.121 00:11:11.121 --- 10.0.0.3 ping statistics --- 00:11:11.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.121 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:11.121 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:11.121 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:11:11.121 00:11:11.121 --- 10.0.0.4 ping statistics --- 00:11:11.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.121 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:11.121 00:11:11.121 --- 10.0.0.1 ping statistics --- 00:11:11.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.121 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:11.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:11.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:11.121 00:11:11.121 --- 10.0.0.2 ping statistics --- 00:11:11.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.121 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:11.121 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64355 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64355 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64355 ']' 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:11.122 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.122 [2024-11-04 17:12:11.885727] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:11:11.122 [2024-11-04 17:12:11.885842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.381 [2024-11-04 17:12:12.045014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.381 [2024-11-04 17:12:12.111757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.381 [2024-11-04 17:12:12.111823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.381 [2024-11-04 17:12:12.111846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.381 [2024-11-04 17:12:12.111857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.381 [2024-11-04 17:12:12.111879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.381 [2024-11-04 17:12:12.112373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.381 [2024-11-04 17:12:12.169099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 [2024-11-04 17:12:12.286506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 Malloc0 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 [2024-11-04 17:12:12.338460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64385 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64385 /var/tmp/bdevperf.sock 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64385 ']' 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:11.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:11.641 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 [2024-11-04 17:12:12.402951] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:11:11.641 [2024-11-04 17:12:12.403451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64385 ] 00:11:11.901 [2024-11-04 17:12:12.557492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.901 [2024-11-04 17:12:12.615060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.901 [2024-11-04 17:12:12.674089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.160 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:12.160 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:12.160 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:12.160 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.160 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:12.160 NVMe0n1 00:11:12.160 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.160 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:12.160 Running I/O for 10 seconds... 00:11:14.474 7182.00 IOPS, 28.05 MiB/s [2024-11-04T17:12:16.215Z] 7690.00 IOPS, 30.04 MiB/s [2024-11-04T17:12:17.151Z] 7833.00 IOPS, 30.60 MiB/s [2024-11-04T17:12:18.089Z] 7956.75 IOPS, 31.08 MiB/s [2024-11-04T17:12:19.026Z] 8110.80 IOPS, 31.68 MiB/s [2024-11-04T17:12:20.036Z] 8234.67 IOPS, 32.17 MiB/s [2024-11-04T17:12:20.974Z] 8267.57 IOPS, 32.30 MiB/s [2024-11-04T17:12:22.353Z] 8230.62 IOPS, 32.15 MiB/s [2024-11-04T17:12:23.290Z] 8214.00 IOPS, 32.09 MiB/s [2024-11-04T17:12:23.290Z] 8216.40 IOPS, 32.10 MiB/s 00:11:22.486 Latency(us) 00:11:22.486 [2024-11-04T17:12:23.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.486 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:22.486 Verification LBA range: start 0x0 length 0x4000 00:11:22.486 NVMe0n1 : 10.07 8255.22 32.25 0.00 0.00 123502.03 17635.14 91035.46 00:11:22.486 [2024-11-04T17:12:23.290Z] =================================================================================================================== 00:11:22.486 [2024-11-04T17:12:23.290Z] Total : 8255.22 32.25 0.00 0.00 123502.03 17635.14 91035.46 00:11:22.486 { 00:11:22.486 "results": [ 00:11:22.486 { 00:11:22.487 "job": "NVMe0n1", 00:11:22.487 "core_mask": "0x1", 00:11:22.487 "workload": "verify", 00:11:22.487 "status": "finished", 00:11:22.487 "verify_range": { 00:11:22.487 "start": 0, 00:11:22.487 "length": 16384 00:11:22.487 }, 00:11:22.487 "queue_depth": 1024, 00:11:22.487 "io_size": 4096, 00:11:22.487 "runtime": 10.074836, 00:11:22.487 "iops": 8255.221226429889, 00:11:22.487 "mibps": 32.24695791574175, 00:11:22.487 "io_failed": 0, 00:11:22.487 "io_timeout": 0, 00:11:22.487 "avg_latency_us": 123502.02859684982, 00:11:22.487 "min_latency_us": 17635.14181818182, 00:11:22.487 "max_latency_us": 91035.46181818182 00:11:22.487 
} 00:11:22.487 ], 00:11:22.487 "core_count": 1 00:11:22.487 } 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64385 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64385 ']' 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64385 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64385 00:11:22.487 killing process with pid 64385 00:11:22.487 Received shutdown signal, test time was about 10.000000 seconds 00:11:22.487 00:11:22.487 Latency(us) 00:11:22.487 [2024-11-04T17:12:23.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.487 [2024-11-04T17:12:23.291Z] =================================================================================================================== 00:11:22.487 [2024-11-04T17:12:23.291Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64385' 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64385 00:11:22.487 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64385 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:22.746 rmmod nvme_tcp 00:11:22.746 rmmod nvme_fabrics 00:11:22.746 rmmod nvme_keyring 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64355 ']' 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64355 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64355 ']' 00:11:22.746 
17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64355 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64355 00:11:22.746 killing process with pid 64355 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64355' 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64355 00:11:22.746 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64355 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.005 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:23.006 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:23.265 17:12:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:23.265 00:11:23.265 real 0m12.781s 00:11:23.265 user 0m21.309s 00:11:23.265 sys 0m2.476s 00:11:23.265 ************************************ 00:11:23.265 END TEST nvmf_queue_depth 00:11:23.265 ************************************ 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.265 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:23.265 17:12:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:23.265 17:12:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:23.265 17:12:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.265 17:12:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.265 ************************************ 00:11:23.265 START TEST nvmf_target_multipath 00:11:23.265 ************************************ 00:11:23.265 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:23.524 * Looking for test storage... 
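The nvmf_target_multipath suite that begins here exposes one subsystem through two listeners (10.0.0.3 and 10.0.0.4), connects to both, and flips the ANA state of each listener while fio keeps I/O running; the per-path state is verified by polling sysfs. A simplified restatement of the check_ana_state helper exercised further down in this run (the retry loop is condensed and its exact timing is an assumption, not a copy of multipath.sh):

# poll the ANA state the kernel reports for one controller path until it
# matches the expected value, or give up after roughly 20 seconds
check_ana_state() {
        local path=$1 expected=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while (( timeout-- > 0 )); do
                [[ -e $ana_state_f && $(<"$ana_state_f") == "$expected" ]] && return 0
                sleep 1
        done
        echo "timeout waiting for $path to reach $expected" >&2
        return 1
}

# both paths start out optimized, as checked right after the two connects
check_ana_state nvme0c0n1 optimized
check_ana_state nvme0c1n1 optimized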
00:11:23.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:23.524 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.525 --rc genhtml_branch_coverage=1 00:11:23.525 --rc genhtml_function_coverage=1 00:11:23.525 --rc genhtml_legend=1 00:11:23.525 --rc geninfo_all_blocks=1 00:11:23.525 --rc geninfo_unexecuted_blocks=1 00:11:23.525 00:11:23.525 ' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.525 --rc genhtml_branch_coverage=1 00:11:23.525 --rc genhtml_function_coverage=1 00:11:23.525 --rc genhtml_legend=1 00:11:23.525 --rc geninfo_all_blocks=1 00:11:23.525 --rc geninfo_unexecuted_blocks=1 00:11:23.525 00:11:23.525 ' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.525 --rc genhtml_branch_coverage=1 00:11:23.525 --rc genhtml_function_coverage=1 00:11:23.525 --rc genhtml_legend=1 00:11:23.525 --rc geninfo_all_blocks=1 00:11:23.525 --rc geninfo_unexecuted_blocks=1 00:11:23.525 00:11:23.525 ' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:23.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.525 --rc genhtml_branch_coverage=1 00:11:23.525 --rc genhtml_function_coverage=1 00:11:23.525 --rc genhtml_legend=1 00:11:23.525 --rc geninfo_all_blocks=1 00:11:23.525 --rc geninfo_unexecuted_blocks=1 00:11:23.525 00:11:23.525 ' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.525 
17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.525 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:23.525 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:23.526 17:12:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:23.526 Cannot find device "nvmf_init_br" 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:23.526 Cannot find device "nvmf_init_br2" 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:23.526 Cannot find device "nvmf_tgt_br" 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:23.526 Cannot find device "nvmf_tgt_br2" 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:23.526 Cannot find device "nvmf_init_br" 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:23.526 Cannot find device "nvmf_init_br2" 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:23.526 Cannot find device "nvmf_tgt_br" 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:23.526 Cannot find device "nvmf_tgt_br2" 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:23.526 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:23.785 Cannot find device "nvmf_br" 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:23.785 Cannot find device "nvmf_init_if" 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:23.785 Cannot find device "nvmf_init_if2" 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
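The nvmf_veth_init sequence traced around this point builds a small virtual topology for the test: a network namespace for the target, veth pairs for the initiator and target sides joined by a bridge, and iptables ACCEPT rules for port 4420. Condensed to its essentials (one initiator/target pair shown; the real common.sh also creates nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, as the trace shows):

# the target runs in its own namespace so initiator and target traffic
# crosses a real (virtual) network path instead of loopback
ip netns add nvmf_tgt_ns_spdk

# one veth pair for the initiator side, one for the target side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# bring the links up and join the bridge ends of both pairs to nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# let NVMe/TCP traffic reach port 4420 and allow bridge forwarding,
# after which the pings below confirm end-to-end connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT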
00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:23.785 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:23.785 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.168 ms 00:11:23.785 00:11:23.785 --- 10.0.0.3 ping statistics --- 00:11:23.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.785 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:11:23.785 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:23.786 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:23.786 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:11:23.786 00:11:23.786 --- 10.0.0.4 ping statistics --- 00:11:23.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.786 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:23.786 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:23.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:23.786 00:11:23.786 --- 10.0.0.1 ping statistics --- 00:11:23.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.786 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:24.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:24.044 00:11:24.044 --- 10.0.0.2 ping statistics --- 00:11:24.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.044 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64754 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64754 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 64754 ']' 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:24.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
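Once connectivity is verified, nvmfappstart launches nvmf_tgt inside the namespace and multipath.sh configures it over /var/tmp/spdk.sock. The RPC sequence traced below reduces to creating the TCP transport, a 64 MiB malloc namespace, one subsystem, and a listener on each target address, followed by one nvme connect per listener. A condensed sketch of that sequence (commands taken from the trace; only the $RPC shorthand is added here):

# nvmf_tgt itself was started in the namespace, as traced above:
#   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# transport plus a 64 MiB, 512-byte-block malloc bdev as the backing namespace
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0

# one subsystem, reachable through two listeners (one per target address)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

# initiator side: one connect per listener yields the two paths
# (nvme0c0n1 / nvme0c1n1) whose ANA states are checked afterwards
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
        --hostid=8c073979-9b92-4972-b56b-796474446288 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
        --hostid=8c073979-9b92-4972-b56b-796474446288 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G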
00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:24.044 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:24.044 [2024-11-04 17:12:24.680749] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:11:24.044 [2024-11-04 17:12:24.680865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.044 [2024-11-04 17:12:24.837641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.321 [2024-11-04 17:12:24.933227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.321 [2024-11-04 17:12:24.933319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.321 [2024-11-04 17:12:24.933336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.321 [2024-11-04 17:12:24.933349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.321 [2024-11-04 17:12:24.933360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.321 [2024-11-04 17:12:24.935057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.321 [2024-11-04 17:12:24.935271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.321 [2024-11-04 17:12:24.935351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.321 [2024-11-04 17:12:24.935366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.321 [2024-11-04 17:12:25.013286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.257 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:25.257 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:11:25.257 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.257 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.257 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:25.257 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.257 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:25.516 [2024-11-04 17:12:26.066777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.516 17:12:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:25.775 Malloc0 00:11:25.775 17:12:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:26.034 17:12:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.294 17:12:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:26.553 [2024-11-04 17:12:27.130990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:26.553 17:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:26.813 [2024-11-04 17:12:27.383262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:26.813 17:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:26.813 17:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:11:27.074 17:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.074 17:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:11:27.074 17:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.074 17:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:27.074 17:12:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64849 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:28.980 17:12:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:28.980 [global] 00:11:28.980 thread=1 00:11:28.980 invalidate=1 00:11:28.980 rw=randrw 00:11:28.980 time_based=1 00:11:28.980 runtime=6 00:11:28.980 ioengine=libaio 00:11:28.980 direct=1 00:11:28.980 bs=4096 00:11:28.980 iodepth=128 00:11:28.980 norandommap=0 00:11:28.980 numjobs=1 00:11:28.980 00:11:28.980 verify_dump=1 00:11:28.980 verify_backlog=512 00:11:28.980 verify_state_save=0 00:11:28.980 do_verify=1 00:11:28.980 verify=crc32c-intel 00:11:28.980 [job0] 00:11:28.980 filename=/dev/nvme0n1 00:11:28.980 Could not set queue depth (nvme0n1) 00:11:29.239 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.239 fio-3.35 00:11:29.239 Starting 1 thread 00:11:30.176 17:12:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:30.435 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:30.694 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:30.953 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:31.212 17:12:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64849 00:11:35.410 00:11:35.410 job0: (groupid=0, jobs=1): err= 0: pid=64870: Mon Nov 4 17:12:36 2024 00:11:35.410 read: IOPS=9363, BW=36.6MiB/s (38.4MB/s)(220MiB/6007msec) 00:11:35.410 slat (usec): min=3, max=7878, avg=62.49, stdev=244.68 00:11:35.410 clat (usec): min=1362, max=20379, avg=9318.39, stdev=1601.75 00:11:35.410 lat (usec): min=1382, max=20390, avg=9380.87, stdev=1605.41 00:11:35.410 clat percentiles (usec): 00:11:35.410 | 1.00th=[ 4883], 5.00th=[ 7177], 10.00th=[ 8029], 20.00th=[ 8586], 00:11:35.410 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:11:35.410 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[12911], 00:11:35.410 | 99.00th=[14484], 99.50th=[15008], 99.90th=[17171], 99.95th=[18220], 00:11:35.410 | 99.99th=[19530] 00:11:35.410 bw ( KiB/s): min= 7696, max=24376, per=51.07%, avg=19127.18, stdev=5977.14, samples=11 00:11:35.410 iops : min= 1924, max= 6094, avg=4781.73, stdev=1494.25, samples=11 00:11:35.410 write: IOPS=5625, BW=22.0MiB/s (23.0MB/s)(114MiB/5194msec); 0 zone resets 00:11:35.410 slat (usec): min=14, max=2884, avg=73.38, stdev=178.20 00:11:35.411 clat (usec): min=1189, max=16557, avg=8136.64, stdev=1461.85 00:11:35.411 lat (usec): min=1287, max=16895, avg=8210.02, stdev=1467.47 00:11:35.411 clat percentiles (usec): 00:11:35.411 | 1.00th=[ 3687], 5.00th=[ 4883], 10.00th=[ 6259], 20.00th=[ 7570], 00:11:35.411 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8586], 00:11:35.411 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[ 9765], 00:11:35.411 | 99.00th=[12518], 99.50th=[13304], 99.90th=[14746], 99.95th=[15401], 00:11:35.411 | 99.99th=[16188] 00:11:35.411 bw ( KiB/s): min= 8104, max=24526, per=85.06%, avg=19141.45, stdev=5871.79, samples=11 00:11:35.411 iops : min= 2026, max= 6131, avg=4785.27, stdev=1467.89, samples=11 00:11:35.411 lat (msec) : 2=0.01%, 4=0.81%, 10=85.67%, 20=13.50%, 50=0.01% 00:11:35.411 cpu : usr=5.28%, sys=21.15%, ctx=4972, majf=0, minf=127 00:11:35.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:35.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:35.411 issued rwts: total=56249,29220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:35.411 00:11:35.411 Run status group 0 (all jobs): 00:11:35.411 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=220MiB (230MB), run=6007-6007msec 00:11:35.411 WRITE: bw=22.0MiB/s (23.0MB/s), 22.0MiB/s-22.0MiB/s (23.0MB/s-23.0MB/s), io=114MiB (120MB), run=5194-5194msec 00:11:35.411 00:11:35.411 Disk stats (read/write): 00:11:35.411 nvme0n1: ios=55402/28651, merge=0/0, ticks=496656/219239, in_queue=715895, util=98.66% 00:11:35.411 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:35.669 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64947 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:35.928 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:35.928 [global] 00:11:35.928 thread=1 00:11:35.928 invalidate=1 00:11:35.928 rw=randrw 00:11:35.928 time_based=1 00:11:35.928 runtime=6 00:11:35.928 ioengine=libaio 00:11:35.928 direct=1 00:11:35.928 bs=4096 00:11:35.928 iodepth=128 00:11:35.928 norandommap=0 00:11:35.928 numjobs=1 00:11:35.928 00:11:36.187 verify_dump=1 00:11:36.187 verify_backlog=512 00:11:36.187 verify_state_save=0 00:11:36.187 do_verify=1 00:11:36.187 verify=crc32c-intel 00:11:36.187 [job0] 00:11:36.187 filename=/dev/nvme0n1 00:11:36.187 Could not set queue depth (nvme0n1) 00:11:36.187 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.187 fio-3.35 00:11:36.187 Starting 1 thread 00:11:37.123 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:37.382 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:37.641 
17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:37.641 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:37.900 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:38.467 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64947 00:11:42.656 00:11:42.656 job0: (groupid=0, jobs=1): err= 0: pid=64974: Mon Nov 4 17:12:43 2024 00:11:42.656 read: IOPS=9639, BW=37.7MiB/s (39.5MB/s)(226MiB/6007msec) 00:11:42.656 slat (usec): min=3, max=6373, avg=50.62, stdev=215.36 00:11:42.656 clat (usec): min=290, max=25684, avg=9210.81, stdev=2763.50 00:11:42.656 lat (usec): min=322, max=25694, avg=9261.43, stdev=2772.46 00:11:42.656 clat percentiles (usec): 00:11:42.656 | 1.00th=[ 2507], 5.00th=[ 4490], 10.00th=[ 5604], 20.00th=[ 7242], 00:11:42.656 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9765], 00:11:42.656 | 70.00th=[10290], 80.00th=[10814], 90.00th=[12256], 95.00th=[14091], 00:11:42.656 | 99.00th=[16909], 99.50th=[18482], 99.90th=[21365], 99.95th=[22676], 00:11:42.656 | 99.99th=[25560] 00:11:42.656 bw ( KiB/s): min= 3824, max=32032, per=51.33%, avg=19791.33, stdev=7532.81, samples=12 00:11:42.656 iops : min= 956, max= 8008, avg=4947.83, stdev=1883.20, samples=12 00:11:42.656 write: IOPS=5789, BW=22.6MiB/s (23.7MB/s)(117MiB/5157msec); 0 zone resets 00:11:42.656 slat (usec): min=12, max=6474, avg=63.07, stdev=162.25 00:11:42.656 clat (usec): min=688, max=20230, avg=7689.07, stdev=2480.20 00:11:42.656 lat (usec): min=760, max=20270, avg=7752.14, stdev=2494.39 00:11:42.656 clat percentiles (usec): 00:11:42.656 | 1.00th=[ 2278], 5.00th=[ 3556], 10.00th=[ 4228], 20.00th=[ 5080], 00:11:42.656 | 30.00th=[ 6325], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8586], 00:11:42.656 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[11207], 00:11:42.656 | 99.00th=[13829], 99.50th=[14615], 99.90th=[17171], 99.95th=[18744], 00:11:42.656 | 99.99th=[19530] 00:11:42.656 bw ( KiB/s): min= 4104, max=32776, per=85.75%, avg=19857.33, stdev=7411.04, samples=12 00:11:42.656 iops : min= 1026, max= 8194, avg=4964.33, stdev=1852.76, samples=12 00:11:42.656 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.04% 00:11:42.656 lat (msec) : 2=0.48%, 4=4.51%, 10=66.90%, 20=27.85%, 50=0.16% 00:11:42.656 cpu : usr=5.74%, sys=21.71%, ctx=5323, majf=0, minf=102 00:11:42.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:42.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:42.656 issued rwts: total=57903,29856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.656 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:11:42.656 00:11:42.656 Run status group 0 (all jobs): 00:11:42.656 READ: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=226MiB (237MB), run=6007-6007msec 00:11:42.656 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=117MiB (122MB), run=5157-5157msec 00:11:42.656 00:11:42.657 Disk stats (read/write): 00:11:42.657 nvme0n1: ios=57400/29117, merge=0/0, ticks=507069/208795, in_queue=715864, util=98.70% 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:42.657 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.916 rmmod nvme_tcp 00:11:42.916 rmmod nvme_fabrics 00:11:42.916 rmmod nvme_keyring 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64754 ']' 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64754 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 64754 ']' 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 64754 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64754 00:11:42.916 killing process with pid 64754 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64754' 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 64754 00:11:42.916 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 64754 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.175 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:43.434 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:43.434 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:43.434 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:43.434 
17:12:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:43.434 00:11:43.434 real 0m20.121s 00:11:43.434 user 1m15.603s 00:11:43.434 sys 0m8.990s 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.434 ************************************ 00:11:43.434 END TEST nvmf_target_multipath 00:11:43.434 ************************************ 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:43.434 ************************************ 00:11:43.434 START TEST nvmf_zcopy 00:11:43.434 ************************************ 00:11:43.434 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:43.694 * Looking for test storage... 
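--- editor's note: the multipath test that ends just above follows one pattern throughout: flip a listener's ANA state over RPC, then poll the matching /sys/block/nvme0cXn1/ana_state file (with a 20-second budget) until the initiator reports the new state. A condensed sketch of that round trip, reconstructed from the trace rather than copied from target/multipath.sh, could look like this:

    # Reconstructed from the trace above; the retry-loop body is an assumption,
    # while the RPC call, the sysfs path and the 20s timeout appear verbatim in the log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    check_ana_state() {
        local path=$1 ana_state=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # wait until the block device exposes the expected ANA state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- > 0 )) || return 1
            sleep 1
        done
    }

    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    check_ana_state nvme0c0n1 inaccessible
--- end note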
00:11:43.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.694 --rc genhtml_branch_coverage=1 00:11:43.694 --rc genhtml_function_coverage=1 00:11:43.694 --rc genhtml_legend=1 00:11:43.694 --rc geninfo_all_blocks=1 00:11:43.694 --rc geninfo_unexecuted_blocks=1 00:11:43.694 00:11:43.694 ' 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.694 --rc genhtml_branch_coverage=1 00:11:43.694 --rc genhtml_function_coverage=1 00:11:43.694 --rc genhtml_legend=1 00:11:43.694 --rc geninfo_all_blocks=1 00:11:43.694 --rc geninfo_unexecuted_blocks=1 00:11:43.694 00:11:43.694 ' 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:43.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.694 --rc genhtml_branch_coverage=1 00:11:43.694 --rc genhtml_function_coverage=1 00:11:43.694 --rc genhtml_legend=1 00:11:43.694 --rc geninfo_all_blocks=1 00:11:43.694 --rc geninfo_unexecuted_blocks=1 00:11:43.694 00:11:43.694 ' 00:11:43.694 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.694 --rc genhtml_branch_coverage=1 00:11:43.694 --rc genhtml_function_coverage=1 00:11:43.694 --rc genhtml_legend=1 00:11:43.694 --rc geninfo_all_blocks=1 00:11:43.695 --rc geninfo_unexecuted_blocks=1 00:11:43.695 00:11:43.695 ' 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
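--- editor's note: the lcov check traced above walks through scripts/common.sh's dotted-version comparison: split both version strings on '.', '-' and ':', then compare the fields numerically one position at a time. A small self-contained sketch of that comparison (missing fields assumed to count as 0, which is an assumption, not visible in the trace):

    # Sketch of the cmp_versions/lt logic traced above: succeeds when $1 < $2.
    version_lt() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local i a b
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal -> not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
--- end note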
00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.695 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
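--- editor's note: the "line 33: [: : integer expression expected" message above is benign. build_nvmf_app_args evaluates '[' '' -eq 1 ']', and test's -eq operator needs an integer on both sides; the variable being checked is empty in this environment, so test exits with status 2 and the script simply takes the false branch, as the following '[' -n '' ']' line shows. A tiny reproduction, with flag as a hypothetical stand-in since the real variable name at common.sh line 33 is not visible in the trace:

    flag=""                 # hypothetical stand-in; the real variable name is not shown in the log
    [ "$flag" -eq 1 ]       # -> bash: [: : integer expression expected (exit status 2)
    [ "${flag:-0}" -eq 1 ]  # defaulting empty to 0 keeps the comparison valid
--- end note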
00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:43.695 Cannot find device "nvmf_init_br" 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:43.695 17:12:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:43.695 Cannot find device "nvmf_init_br2" 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:43.695 Cannot find device "nvmf_tgt_br" 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.695 Cannot find device "nvmf_tgt_br2" 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:43.695 Cannot find device "nvmf_init_br" 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:43.695 Cannot find device "nvmf_init_br2" 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:43.695 Cannot find device "nvmf_tgt_br" 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:43.695 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:43.954 Cannot find device "nvmf_tgt_br2" 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:43.954 Cannot find device "nvmf_br" 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:43.954 Cannot find device "nvmf_init_if" 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:43.954 Cannot find device "nvmf_init_if2" 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:43.954 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:43.955 17:12:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:43.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:43.955 00:11:43.955 --- 10.0.0.3 ping statistics --- 00:11:43.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.955 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:43.955 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:43.955 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:11:43.955 00:11:43.955 --- 10.0.0.4 ping statistics --- 00:11:43.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.955 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:43.955 00:11:43.955 --- 10.0.0.1 ping statistics --- 00:11:43.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.955 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:43.955 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:44.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:44.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:44.214 00:11:44.214 --- 10.0.0.2 ping statistics --- 00:11:44.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.214 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65277 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65277 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 65277 ']' 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:44.214 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.214 [2024-11-04 17:12:44.853038] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:11:44.214 [2024-11-04 17:12:44.853163] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.214 [2024-11-04 17:12:45.002985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.473 [2024-11-04 17:12:45.060541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.473 [2024-11-04 17:12:45.060596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.473 [2024-11-04 17:12:45.060608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.473 [2024-11-04 17:12:45.060617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.473 [2024-11-04 17:12:45.060625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.473 [2024-11-04 17:12:45.061043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.473 [2024-11-04 17:12:45.119538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.473 [2024-11-04 17:12:45.245312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.473 [2024-11-04 17:12:45.261467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.473 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.733 malloc0 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:44.733 { 00:11:44.733 "params": { 00:11:44.733 "name": "Nvme$subsystem", 00:11:44.733 "trtype": "$TEST_TRANSPORT", 00:11:44.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:44.733 "adrfam": "ipv4", 00:11:44.733 "trsvcid": "$NVMF_PORT", 00:11:44.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:44.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:44.733 "hdgst": ${hdgst:-false}, 00:11:44.733 "ddgst": ${ddgst:-false} 00:11:44.733 }, 00:11:44.733 "method": "bdev_nvme_attach_controller" 00:11:44.733 } 00:11:44.733 EOF 00:11:44.733 )") 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
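--- editor's note: just above, gen_nvmf_target_json assembles the bdevperf target description by expanding a heredoc fragment per subsystem and piping the result through jq, and bdevperf is launched with that JSON on an inherited descriptor (--json /dev/fd/62), which suggests process substitution in zcopy.sh. A rough sketch of the pattern, not the verbatim helper: only the fragment fields (name, trtype, traddr, trsvcid, subnqn, hostnqn, digests) appear in the log, and jq -n stands in for the heredoc-plus-jq pipeline used by nvmf/common.sh.

    # Rough sketch; any wrapper structure beyond this fragment is an assumption.
    gen_target_json() {
        jq -n '{
          params: {
            name: "Nvme1", trtype: "tcp", traddr: "10.0.0.3", adrfam: "ipv4",
            trsvcid: "4420", subnqn: "nqn.2016-06.io.spdk:cnode1",
            hostnqn: "nqn.2016-06.io.spdk:host1", hdgst: false, ddgst: false
          },
          method: "bdev_nvme_attach_controller"
        }'
    }

    # likely invoked along the lines of:
    # bdevperf --json <(gen_target_json) -t 10 -q 128 -w verify -o 8192
--- end note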
00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:44.733 17:12:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:44.733 "params": { 00:11:44.733 "name": "Nvme1", 00:11:44.733 "trtype": "tcp", 00:11:44.733 "traddr": "10.0.0.3", 00:11:44.733 "adrfam": "ipv4", 00:11:44.733 "trsvcid": "4420", 00:11:44.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:44.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:44.733 "hdgst": false, 00:11:44.733 "ddgst": false 00:11:44.733 }, 00:11:44.733 "method": "bdev_nvme_attach_controller" 00:11:44.733 }' 00:11:44.733 [2024-11-04 17:12:45.358994] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:11:44.733 [2024-11-04 17:12:45.359098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65303 ] 00:11:44.733 [2024-11-04 17:12:45.512343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.991 [2024-11-04 17:12:45.586075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.991 [2024-11-04 17:12:45.679528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:45.250 Running I/O for 10 seconds... 00:11:47.149 5840.00 IOPS, 45.62 MiB/s [2024-11-04T17:12:48.889Z] 5913.50 IOPS, 46.20 MiB/s [2024-11-04T17:12:49.824Z] 5949.00 IOPS, 46.48 MiB/s [2024-11-04T17:12:51.246Z] 6014.00 IOPS, 46.98 MiB/s [2024-11-04T17:12:52.183Z] 6047.80 IOPS, 47.25 MiB/s [2024-11-04T17:12:53.120Z] 6095.50 IOPS, 47.62 MiB/s [2024-11-04T17:12:54.082Z] 6155.29 IOPS, 48.09 MiB/s [2024-11-04T17:12:55.020Z] 6191.62 IOPS, 48.37 MiB/s [2024-11-04T17:12:55.982Z] 6192.78 IOPS, 48.38 MiB/s [2024-11-04T17:12:55.982Z] 6192.10 IOPS, 48.38 MiB/s 00:11:55.178 Latency(us) 00:11:55.178 [2024-11-04T17:12:55.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.178 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:55.178 Verification LBA range: start 0x0 length 0x1000 00:11:55.178 Nvme1n1 : 10.02 6194.41 48.39 0.00 0.00 20599.22 2129.92 32887.16 00:11:55.178 [2024-11-04T17:12:55.982Z] =================================================================================================================== 00:11:55.178 [2024-11-04T17:12:55.982Z] Total : 6194.41 48.39 0.00 0.00 20599.22 2129.92 32887.16 00:11:55.437 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65420 00:11:55.437 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:55.437 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.437 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:55.438 { 00:11:55.438 "params": { 00:11:55.438 "name": "Nvme$subsystem", 00:11:55.438 "trtype": "$TEST_TRANSPORT", 00:11:55.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:55.438 "adrfam": "ipv4", 00:11:55.438 "trsvcid": "$NVMF_PORT", 00:11:55.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:55.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:55.438 "hdgst": ${hdgst:-false}, 00:11:55.438 "ddgst": ${ddgst:-false} 00:11:55.438 }, 00:11:55.438 "method": "bdev_nvme_attach_controller" 00:11:55.438 } 00:11:55.438 EOF 00:11:55.438 )") 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:55.438 [2024-11-04 17:12:56.040558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.040598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:55.438 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:55.438 "params": { 00:11:55.438 "name": "Nvme1", 00:11:55.438 "trtype": "tcp", 00:11:55.438 "traddr": "10.0.0.3", 00:11:55.438 "adrfam": "ipv4", 00:11:55.438 "trsvcid": "4420", 00:11:55.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:55.438 "hdgst": false, 00:11:55.438 "ddgst": false 00:11:55.438 }, 00:11:55.438 "method": "bdev_nvme_attach_controller" 00:11:55.438 }' 00:11:55.438 [2024-11-04 17:12:56.052502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.052532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.064496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.064540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.076497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.076541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.088517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.088559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.092419] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:11:55.438 [2024-11-04 17:12:56.092541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65420 ] 00:11:55.438 [2024-11-04 17:12:56.100522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.100550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.112507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.112549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.124529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.124570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.136537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.136579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.148538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.148581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.160522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.160562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.172526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.172566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.184555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.184583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.196556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.196597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.208558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.208618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.220552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.220582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.438 [2024-11-04 17:12:56.232557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.438 [2024-11-04 17:12:56.232584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.241934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.697 [2024-11-04 17:12:56.244559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.244586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.256587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.256621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.268582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.268610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.280595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.280651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.292592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.292634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.304594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.304622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.307290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.697 [2024-11-04 17:12:56.316611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.316661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.328647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.328694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.340629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.340684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.352615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.352661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.364623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.364657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.376620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.376666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.380093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.697 [2024-11-04 17:12:56.388618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.388660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.697 [2024-11-04 17:12:56.400624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.697 [2024-11-04 17:12:56.400670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-11-04 17:12:56.412644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:55.698 [2024-11-04 17:12:56.412675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-11-04 17:12:56.424624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-11-04 17:12:56.424667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-11-04 17:12:56.436639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-11-04 17:12:56.436686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-11-04 17:12:56.448645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-11-04 17:12:56.448693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-11-04 17:12:56.460667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-11-04 17:12:56.460700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-11-04 17:12:56.472673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-11-04 17:12:56.472718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-11-04 17:12:56.484689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-11-04 17:12:56.484733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.698 [2024-11-04 17:12:56.496699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.698 [2024-11-04 17:12:56.496746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 Running I/O for 5 seconds... 
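The error pairs repeated above and below appear to come from the nvmf target side: spdk_nvmf_subsystem_add_ns_ext rejects a request for NSID 1 because that namespace ID is still attached, and the paused-subsystem RPC callback then reports that the namespace could not be added. A rough manual way to provoke the same pair of messages, sketched against the standard rpc.py interface with an assumed bdev name (Malloc0) and the subsystem NQN seen in this log:

# Hypothetical reproduction: ask the target to attach a namespace with an NSID that is already in use.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
# If NSID 1 is already attached on cnode1, the target logs the pair seen here:
#   subsystem.c: Requested NSID 1 already in use
#   nvmf_rpc.c:  Unable to add namespace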
00:11:55.957 [2024-11-04 17:12:56.508716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.508761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.525688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.525769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.536806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.536853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.553432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.553480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.567825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.567903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.583797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.583845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.602746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.602793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.616499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.616548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.632022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.632068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.649709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.649757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.665377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.665411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.683896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.683946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.698284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.698370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.714396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.714443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.730823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 
[2024-11-04 17:12:56.730886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.957 [2024-11-04 17:12:56.749590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.957 [2024-11-04 17:12:56.749625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.764179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.764238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.774419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.774466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.789257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.789362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.798415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.798460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.814522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.814571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.823743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.823777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.840232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.840312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.858068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.858116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.875342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.875403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.890733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.890768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.900228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.900285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.915426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.915475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.931921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.931956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.948996] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.949052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.965083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.965129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.984067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.984113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:56.999168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:56.999243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.216 [2024-11-04 17:12:57.017863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.216 [2024-11-04 17:12:57.017910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.031632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.031681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.046856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.046935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.055925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.055972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.071194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.071272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.086614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.086663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.103965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.104011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.120004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.120052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.129793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.129878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.145824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.145872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.157543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.157598] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.172901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.172948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.189158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.189232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.207187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.207246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.222212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.222289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.238465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.238514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.256930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.256977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.476 [2024-11-04 17:12:57.271794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.476 [2024-11-04 17:12:57.271841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.735 [2024-11-04 17:12:57.286239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.286302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.302705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.302751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.318030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.318079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.327597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.327629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.343086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.343133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.357983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.358031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.373202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.373277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.382479] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.382527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.398455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.398514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.414186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.414244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.431032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.431080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.448966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.449013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.464130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.464179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.475273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.475327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.491654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.491703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 12070.00 IOPS, 94.30 MiB/s [2024-11-04T17:12:57.540Z] [2024-11-04 17:12:57.507034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.507081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.516454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.516505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.736 [2024-11-04 17:12:57.531068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.736 [2024-11-04 17:12:57.531115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.995 [2024-11-04 17:12:57.546458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.546504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.557519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.557568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.573186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.573260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.590031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:56.996 [2024-11-04 17:12:57.590077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.606165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.606243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.623935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.623983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.637966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.638011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.653999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.654046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.670746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.670812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.688249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.688301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.705761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.705824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.721120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.721170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.732385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.732432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.740780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.740826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.755322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.755367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.771528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.771590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.996 [2024-11-04 17:12:57.788060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.996 [2024-11-04 17:12:57.788123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.804488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.804537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.822448] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.822495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.839035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.839083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.856671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.856705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.871436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.871482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.887330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.887377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.903865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.903915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.920071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.920118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.936862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.936910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.953330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.953400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.969793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.969840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:57.988019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:57.988066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:58.002732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:58.002779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:58.018070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:58.018127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:58.036236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:58.036281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.255 [2024-11-04 17:12:58.051137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.255 [2024-11-04 17:12:58.051186] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.066093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.066143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.083632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.083681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.098356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.098401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.113869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.113931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.131806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.131854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.146979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.147025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.158300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.158345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.174541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.174588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.190153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.190235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.206058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.206104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.223522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.223569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.238652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.238699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.253569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.253616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.263141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.263188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.278881] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.278927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.295852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.295900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.515 [2024-11-04 17:12:58.311144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.515 [2024-11-04 17:12:58.311191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.327523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.327569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.343385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.343431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.360873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.360938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.376348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.376395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.386125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.386161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.401807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.401870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.418220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.418277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.435831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.435866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.451176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.451251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.461012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.461060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.476609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.476657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.495184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.495247] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 12245.50 IOPS, 95.67 MiB/s [2024-11-04T17:12:58.579Z] [2024-11-04 17:12:58.510067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.510113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.526625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.526672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.543099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.543146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.558337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.558382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.775 [2024-11-04 17:12:58.574271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.775 [2024-11-04 17:12:58.574328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.591031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.591077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.607028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.607077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.625645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.625693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.639307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.639353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.656852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.656900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.671235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.671317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.689047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.689097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.703948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.704004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.714065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.714114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 
17:12:58.728662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.728709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.744612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.744659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.760636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.760683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.772125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.772171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.780333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.780379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.795341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.795386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.811248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.811321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.035 [2024-11-04 17:12:58.827273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.035 [2024-11-04 17:12:58.827320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.845470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.845521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.860513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.860559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.870609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.870661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.886791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.886868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.901292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.901340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.916836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.916883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.933643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.933691] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.951353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.951400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.965552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.965600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.981667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.981733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:58.998064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:58.998116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:59.015648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:59.015725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:59.030062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:59.030113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:59.047091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:59.047139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:59.061866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:59.061921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:59.076517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:59.076564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.294 [2024-11-04 17:12:59.092852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.294 [2024-11-04 17:12:59.092899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.107945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.107992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.123732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.123780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.139983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.140031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.157315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.157393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.173253] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.173298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.191287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.191338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.205128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.205175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.220762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.220810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.239450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.239497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.254113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.254160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.265631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.265681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.280469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.280532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.289979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.290027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.305495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.305531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.322524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.322574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.338322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.338368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.554 [2024-11-04 17:12:59.356089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.554 [2024-11-04 17:12:59.356139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 [2024-11-04 17:12:59.370309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-11-04 17:12:59.370355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 [2024-11-04 17:12:59.385414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-11-04 17:12:59.385448] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 [2024-11-04 17:12:59.403093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.841 [2024-11-04 17:12:59.403142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.841 [2024-11-04 17:12:59.416904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.416952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.432834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.432912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.451119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.451168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.464583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.464630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.479334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.479381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.488505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.488539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.504631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.504689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 12315.67 IOPS, 96.22 MiB/s [2024-11-04T17:12:59.646Z] [2024-11-04 17:12:59.514501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.514534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.526591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.526639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.542448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.542496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.559531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.559585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.576917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.576969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.592416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.592463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 
17:12:59.602743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.602791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:58.842 [2024-11-04 17:12:59.617940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:58.842 [2024-11-04 17:12:59.617996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.634087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.634133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.652326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.652371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.667576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.667624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.683271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.683305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.701078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.701125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.717347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.717412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.734099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.734159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.749252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.749316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.764650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.764685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.774322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.774371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.790040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.790086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.807240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.807300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.823110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.823158] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.841557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.841590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.856518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.856554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.874107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.874155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.101 [2024-11-04 17:12:59.890090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.101 [2024-11-04 17:12:59.890139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:12:59.908442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:12:59.908488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:12:59.922977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:12:59.923025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:12:59.939712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:12:59.939760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:12:59.954404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:12:59.954453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:12:59.972211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:12:59.972270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:12:59.986919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:12:59.986967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:12:59.995692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:12:59.995740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.012159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.012233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.030098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.030144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.045030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.045077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.060918] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.060984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.078931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.078980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.093859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.093907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.111245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.111305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.126106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.126159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.136093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.136140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.360 [2024-11-04 17:13:00.151653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.360 [2024-11-04 17:13:00.151702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.168157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.168204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.184988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.185038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.202631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.202678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.217705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.217756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.229217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.229296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.245091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.245138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.261748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.261797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.280223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.280284] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.294148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.294195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.309584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.309618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.325825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.325859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.619 [2024-11-04 17:13:00.344213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.619 [2024-11-04 17:13:00.344303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.620 [2024-11-04 17:13:00.358496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.620 [2024-11-04 17:13:00.358543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.620 [2024-11-04 17:13:00.374553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.620 [2024-11-04 17:13:00.374602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.620 [2024-11-04 17:13:00.390616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.620 [2024-11-04 17:13:00.390663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.620 [2024-11-04 17:13:00.408314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.620 [2024-11-04 17:13:00.408360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.879 [2024-11-04 17:13:00.423506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.879 [2024-11-04 17:13:00.423553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.879 [2024-11-04 17:13:00.432263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.879 [2024-11-04 17:13:00.432311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.879 [2024-11-04 17:13:00.448005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.879 [2024-11-04 17:13:00.448052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.879 [2024-11-04 17:13:00.462970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.879 [2024-11-04 17:13:00.463017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.879 [2024-11-04 17:13:00.474125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.879 [2024-11-04 17:13:00.474173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.879 [2024-11-04 17:13:00.490277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.879 [2024-11-04 17:13:00.490322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.879 12225.75 IOPS, 95.51 MiB/s [2024-11-04T17:13:00.683Z] [2024-11-04 
17:13:00.506718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.879 [2024-11-04 17:13:00.506780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.879 [2024-11-04 17:13:00.518081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.518129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.533898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.533944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.551367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.551412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.567354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.567401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.584963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.585012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.599476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.599523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.615168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.615241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.632519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.632567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.648880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.648928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.665714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.665764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:59.880 [2024-11-04 17:13:00.680904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:59.880 [2024-11-04 17:13:00.680952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.697288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.697335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.712989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.713036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.729809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.729856] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.747813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.747859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.763034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.763080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.774456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.774520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.790429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.790477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.807635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.807681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.823927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.823975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.839683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.839732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.849450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.849498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.865790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.865836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.883175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.883246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.898435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.898485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.908337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.908384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.924407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.924456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.139 [2024-11-04 17:13:00.941191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.139 [2024-11-04 17:13:00.941272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.398 [2024-11-04 17:13:00.956890] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.398 [2024-11-04 17:13:00.956937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.398 [2024-11-04 17:13:00.972430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.398 [2024-11-04 17:13:00.972476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.398 [2024-11-04 17:13:00.989477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.398 [2024-11-04 17:13:00.989529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.398 [2024-11-04 17:13:01.006597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.398 [2024-11-04 17:13:01.006661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.398 [2024-11-04 17:13:01.022422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.398 [2024-11-04 17:13:01.022467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.398 [2024-11-04 17:13:01.039033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.398 [2024-11-04 17:13:01.039080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.398 [2024-11-04 17:13:01.055539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.398 [2024-11-04 17:13:01.055586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.066752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.066798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.083007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.083054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.099346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.099394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.115981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.116029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.131936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.131988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.150357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.150406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.165224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.165281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.175374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.175422] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.399 [2024-11-04 17:13:01.187156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.399 [2024-11-04 17:13:01.187203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.658 [2024-11-04 17:13:01.204250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.658 [2024-11-04 17:13:01.204309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.658 [2024-11-04 17:13:01.220550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.658 [2024-11-04 17:13:01.220583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.658 [2024-11-04 17:13:01.237476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.658 [2024-11-04 17:13:01.237511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.658 [2024-11-04 17:13:01.252317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.658 [2024-11-04 17:13:01.252365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.658 [2024-11-04 17:13:01.268157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.658 [2024-11-04 17:13:01.268232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.658 [2024-11-04 17:13:01.287492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.658 [2024-11-04 17:13:01.287560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.658 [2024-11-04 17:13:01.303042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.658 [2024-11-04 17:13:01.303091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.320000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.320058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.337902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.337952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.353382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.353416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.371001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.371049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.386642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.386692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.402819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.402885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.420921] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.420970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.437136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.437184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.659 [2024-11-04 17:13:01.453386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.659 [2024-11-04 17:13:01.453422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.463668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.463703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.475393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.475441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.486264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.486322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.501832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.501897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 12220.00 IOPS, 95.47 MiB/s [2024-11-04T17:13:01.722Z] [2024-11-04 17:13:01.513028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.513075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 00:12:00.918 Latency(us) 00:12:00.918 [2024-11-04T17:13:01.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.918 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:00.918 Nvme1n1 : 5.01 12227.14 95.52 0.00 0.00 10457.80 4110.89 19422.49 00:12:00.918 [2024-11-04T17:13:01.722Z] =================================================================================================================== 00:12:00.918 [2024-11-04T17:13:01.722Z] Total : 12227.14 95.52 0.00 0.00 10457.80 4110.89 19422.49 00:12:00.918 [2024-11-04 17:13:01.525045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.525106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.537031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.537077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.549067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.549139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.561057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.561109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 
17:13:01.573057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.573123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.585096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.585152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.597069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.597121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.609094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.609147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.621085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.621145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.633104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.633161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.645087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.645140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.657110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.918 [2024-11-04 17:13:01.657161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.918 [2024-11-04 17:13:01.669082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.919 [2024-11-04 17:13:01.669127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.919 [2024-11-04 17:13:01.681104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.919 [2024-11-04 17:13:01.681149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.919 [2024-11-04 17:13:01.693105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.919 [2024-11-04 17:13:01.693154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.919 [2024-11-04 17:13:01.705141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.919 [2024-11-04 17:13:01.705194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.919 [2024-11-04 17:13:01.717129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:00.919 [2024-11-04 17:13:01.717173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.178 [2024-11-04 17:13:01.729094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:01.178 [2024-11-04 17:13:01.729136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.178 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65420) - No such process 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 65420 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.178 delay0 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.178 17:13:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:12:01.178 [2024-11-04 17:13:01.940834] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:07.764 Initializing NVMe Controllers 00:12:07.764 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:07.764 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:07.764 Initialization complete. Launching workers. 
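The xtrace entries just above (zcopy.sh lines 49 through 56) are the substance of this phase: the script reaps the background I/O job (pid 65420, already gone, hence the "No such process" from line 42), removes namespace 1 from nqn.2016-06.io.spdk:cnode1, wraps malloc0 in a delay bdev, re-exports it as namespace 1, and then runs the abort example against the TCP listener. A minimal stand-alone sketch of that sequence, using rpc.py directly instead of the rpc_cmd helper (the rpc.py path and the default RPC socket are assumptions; the RPC names, arguments, and the abort invocation are copied from the trace), would be:

# Hypothetical replay of the zcopy.sh steps traced above; the rpc.py path is
# an assumption, everything else is taken verbatim from this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Free NSID 1 on the target subsystem so the delay bdev can take its place.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Layer a delay bdev (delay0) on top of malloc0 with the same latency knobs.
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Attach the delayed bdev back as namespace 1.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Drive it with the abort example over TCP, as zcopy.sh@56 does.
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

As a sanity check on the earlier Nvme1n1 summary, the throughput column is just the IOPS scaled by the job's 8192-byte I/O size: 12227.14 IOPS x 8192 B is approximately 95.52 MiB/s, matching the reported MiB/s.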
00:12:07.764 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 95 00:12:07.764 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 382, failed to submit 33 00:12:07.764 success 268, unsuccessful 114, failed 0 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.764 rmmod nvme_tcp 00:12:07.764 rmmod nvme_fabrics 00:12:07.764 rmmod nvme_keyring 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65277 ']' 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65277 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 65277 ']' 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 65277 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65277 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:07.764 killing process with pid 65277 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65277' 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 65277 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 65277 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.764 17:13:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.764 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:12:08.024 00:12:08.024 real 0m24.410s 00:12:08.024 user 0m39.593s 00:12:08.024 sys 0m7.138s 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:08.024 ************************************ 00:12:08.024 END TEST nvmf_zcopy 00:12:08.024 ************************************ 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:08.024 ************************************ 00:12:08.024 START TEST nvmf_nmic 00:12:08.024 ************************************ 00:12:08.024 17:13:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:08.024 * Looking for test storage... 00:12:08.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:08.024 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.025 --rc genhtml_branch_coverage=1 00:12:08.025 --rc genhtml_function_coverage=1 00:12:08.025 --rc genhtml_legend=1 00:12:08.025 --rc geninfo_all_blocks=1 00:12:08.025 --rc geninfo_unexecuted_blocks=1 00:12:08.025 00:12:08.025 ' 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.025 --rc genhtml_branch_coverage=1 00:12:08.025 --rc genhtml_function_coverage=1 00:12:08.025 --rc genhtml_legend=1 00:12:08.025 --rc geninfo_all_blocks=1 00:12:08.025 --rc geninfo_unexecuted_blocks=1 00:12:08.025 00:12:08.025 ' 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.025 --rc genhtml_branch_coverage=1 00:12:08.025 --rc genhtml_function_coverage=1 00:12:08.025 --rc genhtml_legend=1 00:12:08.025 --rc geninfo_all_blocks=1 00:12:08.025 --rc geninfo_unexecuted_blocks=1 00:12:08.025 00:12:08.025 ' 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.025 --rc genhtml_branch_coverage=1 00:12:08.025 --rc genhtml_function_coverage=1 00:12:08.025 --rc genhtml_legend=1 00:12:08.025 --rc geninfo_all_blocks=1 00:12:08.025 --rc geninfo_unexecuted_blocks=1 00:12:08.025 00:12:08.025 ' 00:12:08.025 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.284 17:13:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.284 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.285 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:08.285 17:13:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:08.285 Cannot 
find device "nvmf_init_br" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:08.285 Cannot find device "nvmf_init_br2" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:08.285 Cannot find device "nvmf_tgt_br" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.285 Cannot find device "nvmf_tgt_br2" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:08.285 Cannot find device "nvmf_init_br" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:08.285 Cannot find device "nvmf_init_br2" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:08.285 Cannot find device "nvmf_tgt_br" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:08.285 Cannot find device "nvmf_tgt_br2" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:08.285 Cannot find device "nvmf_br" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:08.285 Cannot find device "nvmf_init_if" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:08.285 Cannot find device "nvmf_init_if2" 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:08.285 17:13:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
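The nvmf_veth_init sequence that follows builds the test network: two initiator-side veth pairs kept in the default namespace, two target-side veth pairs whose *_if ends are moved into nvmf_tgt_ns_spdk, all four *_br peer ends enslaved to an nvmf_br bridge, iptables rules opening TCP port 4420, and ping checks in both directions. A condensed standalone sketch of that topology is given below, using the same interface names and 10.0.0.0/24 addresses that appear in the log; it is an illustrative summary, not the SPDK common.sh script itself.

    #!/usr/bin/env bash
    # Sketch of the veth/bridge topology nvmf_veth_init sets up (names/addresses as logged).
    set -e
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    # Initiator-side veth pairs (stay in the default namespace).
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # Target-side veth pairs; the *_if ends move into the target namespace.
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    # Addressing: initiators 10.0.0.1/.2, target interfaces 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    # Bridge the four *_br peer ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Allow NVMe/TCP (port 4420) in and bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Sanity check: initiator side reaches the target addresses and vice versa.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec "$NS" ping -c 1 10.0.0.1
    ip netns exec "$NS" ping -c 1 10.0.0.2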
00:12:08.285 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:08.285 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:08.285 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:08.285 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:08.285 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:08.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:12:08.545 00:12:08.545 --- 10.0.0.3 ping statistics --- 00:12:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.545 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:08.545 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:08.545 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:12:08.545 00:12:08.545 --- 10.0.0.4 ping statistics --- 00:12:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.545 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:08.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:08.545 00:12:08.545 --- 10.0.0.1 ping statistics --- 00:12:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.545 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:08.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:08.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:08.545 00:12:08.545 --- 10.0.0.2 ping statistics --- 00:12:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.545 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.545 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65800 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65800 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 65800 ']' 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:08.546 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:08.546 [2024-11-04 17:13:09.337675] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:12:08.546 [2024-11-04 17:13:09.337797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.805 [2024-11-04 17:13:09.481099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.805 [2024-11-04 17:13:09.542381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.805 [2024-11-04 17:13:09.542638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.805 [2024-11-04 17:13:09.542929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.805 [2024-11-04 17:13:09.543118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.805 [2024-11-04 17:13:09.543297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.805 [2024-11-04 17:13:09.544742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.805 [2024-11-04 17:13:09.544874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.805 [2024-11-04 17:13:09.544919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.805 [2024-11-04 17:13:09.544923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.068 [2024-11-04 17:13:09.620623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.068 [2024-11-04 17:13:09.743403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.068 Malloc0 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:09.068 17:13:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:09.068 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.069 [2024-11-04 17:13:09.811441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:09.069 test case1: single bdev can't be used in multiple subsystems 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.069 [2024-11-04 17:13:09.835204] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:09.069 [2024-11-04 17:13:09.835255] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:09.069 [2024-11-04 17:13:09.835268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.069 request: 00:12:09.069 { 00:12:09.069 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:09.069 "namespace": { 00:12:09.069 "bdev_name": "Malloc0", 00:12:09.069 "no_auto_visible": false 00:12:09.069 }, 00:12:09.069 "method": "nvmf_subsystem_add_ns", 00:12:09.069 "req_id": 1 00:12:09.069 } 00:12:09.069 Got JSON-RPC error response 00:12:09.069 response: 00:12:09.069 { 00:12:09.069 "code": -32602, 00:12:09.069 "message": "Invalid parameters" 00:12:09.069 } 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:09.069 Adding namespace failed - expected result. 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:09.069 test case2: host connect to nvmf target in multiple paths 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:09.069 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.070 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.070 [2024-11-04 17:13:09.851380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:09.070 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.070 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:09.331 17:13:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:09.331 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.331 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:12:09.331 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.331 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:09.331 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:12:11.864 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:11.864 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:11.864 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.864 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:11.864 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.864 17:13:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:12:11.864 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:11.864 [global] 00:12:11.864 thread=1 00:12:11.864 invalidate=1 00:12:11.864 rw=write 00:12:11.864 time_based=1 00:12:11.864 runtime=1 00:12:11.864 ioengine=libaio 00:12:11.864 direct=1 00:12:11.864 bs=4096 00:12:11.864 iodepth=1 00:12:11.864 norandommap=0 00:12:11.864 numjobs=1 00:12:11.864 00:12:11.864 verify_dump=1 00:12:11.864 verify_backlog=512 00:12:11.864 verify_state_save=0 00:12:11.864 do_verify=1 00:12:11.864 verify=crc32c-intel 00:12:11.864 [job0] 00:12:11.864 filename=/dev/nvme0n1 00:12:11.864 Could not set queue depth (nvme0n1) 00:12:11.864 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:11.864 fio-3.35 00:12:11.864 Starting 1 thread 00:12:12.818 00:12:12.818 job0: (groupid=0, jobs=1): err= 0: pid=65880: Mon Nov 4 17:13:13 2024 00:12:12.818 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:12.818 slat (nsec): min=11074, max=51284, avg=13753.84, stdev=4293.28 00:12:12.818 clat (usec): min=142, max=4614, avg=202.10, stdev=155.87 00:12:12.818 lat (usec): min=155, max=4636, avg=215.85, stdev=156.33 00:12:12.818 clat percentiles (usec): 00:12:12.818 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 176], 00:12:12.818 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:12:12.818 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 237], 00:12:12.818 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 3687], 99.95th=[ 4228], 00:12:12.818 | 99.99th=[ 4621] 00:12:12.818 write: IOPS=3005, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1001msec); 0 zone resets 00:12:12.818 slat (usec): min=16, max=106, avg=20.91, stdev= 6.61 00:12:12.818 clat (usec): min=85, max=1261, avg=124.97, stdev=37.15 00:12:12.818 lat (usec): min=102, max=1281, avg=145.88, stdev=37.77 00:12:12.818 clat percentiles (usec): 00:12:12.818 | 1.00th=[ 91], 5.00th=[ 97], 10.00th=[ 101], 20.00th=[ 106], 00:12:12.818 | 30.00th=[ 110], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 124], 00:12:12.818 | 70.00th=[ 133], 80.00th=[ 143], 90.00th=[ 157], 95.00th=[ 167], 00:12:12.818 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 537], 99.95th=[ 1012], 00:12:12.818 | 99.99th=[ 1254] 00:12:12.818 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:12:12.818 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:12.818 lat (usec) : 100=5.12%, 250=93.39%, 500=1.31%, 750=0.02%, 1000=0.04% 00:12:12.818 lat (msec) : 2=0.04%, 4=0.05%, 10=0.04% 00:12:12.818 cpu : usr=2.50%, sys=7.10%, ctx=5570, majf=0, minf=5 00:12:12.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.818 issued rwts: total=2560,3009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.818 00:12:12.818 Run status group 0 (all jobs): 00:12:12.818 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:12.818 WRITE: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=11.8MiB (12.3MB), run=1001-1001msec 00:12:12.818 00:12:12.818 Disk stats 
(read/write): 00:12:12.818 nvme0n1: ios=2421/2560, merge=0/0, ticks=509/323, in_queue=832, util=90.88% 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.818 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.818 rmmod nvme_tcp 00:12:13.121 rmmod nvme_fabrics 00:12:13.121 rmmod nvme_keyring 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65800 ']' 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65800 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 65800 ']' 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 65800 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65800 00:12:13.121 killing process with pid 65800 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65800' 00:12:13.121 17:13:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 65800 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 65800 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:13.121 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:13.380 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:13.380 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:13.380 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:13.380 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:13.380 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:13.380 00:12:13.380 real 0m5.531s 00:12:13.380 user 0m16.070s 00:12:13.380 sys 0m2.382s 00:12:13.380 ************************************ 00:12:13.380 END TEST nvmf_nmic 00:12:13.380 ************************************ 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.380 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:13.640 ************************************ 00:12:13.640 START TEST nvmf_fio_target 00:12:13.640 ************************************ 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:13.640 * Looking for test storage... 00:12:13.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:13.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.640 --rc genhtml_branch_coverage=1 00:12:13.640 --rc genhtml_function_coverage=1 00:12:13.640 --rc genhtml_legend=1 00:12:13.640 --rc geninfo_all_blocks=1 00:12:13.640 --rc geninfo_unexecuted_blocks=1 00:12:13.640 00:12:13.640 ' 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:13.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.640 --rc genhtml_branch_coverage=1 00:12:13.640 --rc genhtml_function_coverage=1 00:12:13.640 --rc genhtml_legend=1 00:12:13.640 --rc geninfo_all_blocks=1 00:12:13.640 --rc geninfo_unexecuted_blocks=1 00:12:13.640 00:12:13.640 ' 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:13.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.640 --rc genhtml_branch_coverage=1 00:12:13.640 --rc genhtml_function_coverage=1 00:12:13.640 --rc genhtml_legend=1 00:12:13.640 --rc geninfo_all_blocks=1 00:12:13.640 --rc geninfo_unexecuted_blocks=1 00:12:13.640 00:12:13.640 ' 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:13.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.640 --rc genhtml_branch_coverage=1 00:12:13.640 --rc genhtml_function_coverage=1 00:12:13.640 --rc genhtml_legend=1 00:12:13.640 --rc geninfo_all_blocks=1 00:12:13.640 --rc geninfo_unexecuted_blocks=1 00:12:13.640 00:12:13.640 ' 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:13.640 
17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.640 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:13.900 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:13.900 17:13:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:13.900 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:13.901 Cannot find device "nvmf_init_br" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:13.901 Cannot find device "nvmf_init_br2" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:13.901 Cannot find device "nvmf_tgt_br" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:13.901 Cannot find device "nvmf_tgt_br2" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:13.901 Cannot find device "nvmf_init_br" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:13.901 Cannot find device "nvmf_init_br2" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:13.901 Cannot find device "nvmf_tgt_br" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:13.901 Cannot find device "nvmf_tgt_br2" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:13.901 Cannot find device "nvmf_br" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:13.901 Cannot find device "nvmf_init_if" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:13.901 Cannot find device "nvmf_init_if2" 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:13.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:13.901 
17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:13.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:13.901 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:14.160 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:14.160 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:14.160 00:12:14.160 --- 10.0.0.3 ping statistics --- 00:12:14.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.160 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:14.160 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:14.160 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:12:14.160 00:12:14.160 --- 10.0.0.4 ping statistics --- 00:12:14.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.160 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:14.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:14.160 00:12:14.160 --- 10.0.0.1 ping statistics --- 00:12:14.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.160 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:14.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:14.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:14.160 00:12:14.160 --- 10.0.0.2 ping statistics --- 00:12:14.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.160 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66119 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66119 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 66119 ']' 00:12:14.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:14.160 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.160 [2024-11-04 17:13:14.952667] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
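For orientation, the nvmf_veth_init trace above reduces to the shell sequence below: two initiator-side veth pairs stay in the root namespace (10.0.0.1/24 and 10.0.0.2/24), two target-side pairs are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), the peer ends are enslaved to the nvmf_br bridge, and iptables accepts NVMe/TCP on port 4420. This is only a condensed sketch reconstructed from the log (the loops are my shorthand for the per-interface calls, not how nvmf/common.sh is written); it assumes root privileges and the same SPDK build path shown above.

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry addresses, the *_br ends get bridged
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # target-facing interfaces live inside the private namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # single bridge joins both sides so the namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # allow NVMe/TCP (port 4420) in and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    modprobe nvme-tcp
    # the target runs inside the namespace, so 10.0.0.3/10.0.0.4 become its listen addresses
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

After this point the trace continues with the target's startup banner; the connectivity pings above (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside the namespace) confirm the bridge path before any NVMe traffic is attempted.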
00:12:14.160 [2024-11-04 17:13:14.953003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.419 [2024-11-04 17:13:15.109067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.419 [2024-11-04 17:13:15.170825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.419 [2024-11-04 17:13:15.170902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.419 [2024-11-04 17:13:15.170917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.419 [2024-11-04 17:13:15.170928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.419 [2024-11-04 17:13:15.170937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.419 [2024-11-04 17:13:15.175253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.419 [2024-11-04 17:13:15.175458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.419 [2024-11-04 17:13:15.176493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.419 [2024-11-04 17:13:15.176505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.678 [2024-11-04 17:13:15.234622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.678 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:14.678 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:12:14.678 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:14.678 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:14.678 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.678 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.678 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:14.936 [2024-11-04 17:13:15.635646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.936 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.502 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:15.502 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.502 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:15.502 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.760 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:15.760 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:16.332 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:16.332 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:16.590 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:16.848 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:16.848 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:17.106 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:17.106 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:17.422 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:17.422 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:17.422 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.989 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:17.989 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.989 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:17.989 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.262 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:18.521 [2024-11-04 17:13:19.246127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:18.521 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:18.780 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:19.038 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:19.296 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:19.296 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:12:19.296 17:13:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.296 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:12:19.296 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:12:19.296 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:12:21.199 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:21.199 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:21.199 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.199 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:12:21.199 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.199 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:12:21.199 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:21.458 [global] 00:12:21.458 thread=1 00:12:21.458 invalidate=1 00:12:21.458 rw=write 00:12:21.458 time_based=1 00:12:21.458 runtime=1 00:12:21.458 ioengine=libaio 00:12:21.458 direct=1 00:12:21.458 bs=4096 00:12:21.458 iodepth=1 00:12:21.458 norandommap=0 00:12:21.458 numjobs=1 00:12:21.458 00:12:21.458 verify_dump=1 00:12:21.458 verify_backlog=512 00:12:21.458 verify_state_save=0 00:12:21.458 do_verify=1 00:12:21.458 verify=crc32c-intel 00:12:21.458 [job0] 00:12:21.458 filename=/dev/nvme0n1 00:12:21.458 [job1] 00:12:21.458 filename=/dev/nvme0n2 00:12:21.458 [job2] 00:12:21.458 filename=/dev/nvme0n3 00:12:21.458 [job3] 00:12:21.458 filename=/dev/nvme0n4 00:12:21.458 Could not set queue depth (nvme0n1) 00:12:21.458 Could not set queue depth (nvme0n2) 00:12:21.458 Could not set queue depth (nvme0n3) 00:12:21.458 Could not set queue depth (nvme0n4) 00:12:21.458 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.458 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.458 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.458 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.458 fio-3.35 00:12:21.458 Starting 4 threads 00:12:23.315 00:12:23.315 job0: (groupid=0, jobs=1): err= 0: pid=66297: Mon Nov 4 17:13:23 2024 00:12:23.315 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:12:23.315 slat (nsec): min=10938, max=59215, avg=15382.77, stdev=4408.98 00:12:23.315 clat (usec): min=137, max=1069, avg=201.04, stdev=68.49 00:12:23.315 lat (usec): min=148, max=1089, avg=216.42, stdev=70.74 00:12:23.315 clat percentiles (usec): 00:12:23.315 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 161], 00:12:23.315 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:12:23.315 | 70.00th=[ 186], 80.00th=[ 231], 90.00th=[ 326], 95.00th=[ 355], 00:12:23.315 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 469], 99.95th=[ 586], 00:12:23.315 | 99.99th=[ 1074] 
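As a reading aid before the rest of this fio report, the target/fio.sh provisioning traced above (between the target start and the first fio run) amounts to the rpc.py sequence below. All arguments are copied from the log; the $rpc shorthand and the loops are condensations of the individual calls, and the sketch assumes nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock RPC socket.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # flags exactly as issued by target/fio.sh
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # seven 64 MiB malloc bdevs with 512-byte blocks: Malloc0 .. Malloc6
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done
    # RAID0 and concat volumes over the later malloc bdevs (64 KiB strip size)
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # one subsystem exposing four namespaces, listening on the in-namespace address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for ns in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # initiator side: connect through the bridge, then waitforserial polls lsblk
    # until the four namespaces (nvme0n1..nvme0n4) show up
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
                 --hostid=8c073979-9b92-4972-b56b-796474446288 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

The four fio-wrapper invocations that follow (sequential write, randwrite, and the iodepth=128 variants) all run against those /dev/nvme0n1..n4 block devices, which is why each job file lists exactly four filenames.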
00:12:23.315 write: IOPS=2612, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1003msec); 0 zone resets 00:12:23.315 slat (nsec): min=14773, max=80938, avg=22665.26, stdev=6194.61 00:12:23.315 clat (usec): min=95, max=3474, avg=144.95, stdev=85.78 00:12:23.315 lat (usec): min=118, max=3506, avg=167.61, stdev=88.35 00:12:23.315 clat percentiles (usec): 00:12:23.315 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 119], 00:12:23.315 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:12:23.315 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 221], 95.00th=[ 265], 00:12:23.315 | 99.00th=[ 379], 99.50th=[ 416], 99.90th=[ 766], 99.95th=[ 889], 00:12:23.315 | 99.99th=[ 3490] 00:12:23.315 bw ( KiB/s): min= 8664, max=12288, per=27.40%, avg=10476.00, stdev=2562.55, samples=2 00:12:23.315 iops : min= 2166, max= 3072, avg=2619.00, stdev=640.64, samples=2 00:12:23.315 lat (usec) : 100=0.04%, 250=86.87%, 500=12.97%, 750=0.04%, 1000=0.04% 00:12:23.315 lat (msec) : 2=0.02%, 4=0.02% 00:12:23.315 cpu : usr=1.70%, sys=8.08%, ctx=5184, majf=0, minf=11 00:12:23.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.315 issued rwts: total=2560,2620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.315 job1: (groupid=0, jobs=1): err= 0: pid=66298: Mon Nov 4 17:13:23 2024 00:12:23.315 read: IOPS=1584, BW=6338KiB/s (6490kB/s)(6344KiB/1001msec) 00:12:23.315 slat (usec): min=11, max=292, avg=17.94, stdev= 9.08 00:12:23.315 clat (usec): min=141, max=3232, avg=292.88, stdev=95.72 00:12:23.315 lat (usec): min=154, max=3282, avg=310.82, stdev=97.59 00:12:23.315 clat percentiles (usec): 00:12:23.315 | 1.00th=[ 200], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 262], 00:12:23.315 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:12:23.315 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[ 375], 00:12:23.315 | 99.00th=[ 537], 99.50th=[ 685], 99.90th=[ 1303], 99.95th=[ 3228], 00:12:23.315 | 99.99th=[ 3228] 00:12:23.315 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:23.315 slat (usec): min=17, max=146, avg=26.49, stdev= 9.78 00:12:23.315 clat (usec): min=112, max=456, avg=217.40, stdev=40.04 00:12:23.315 lat (usec): min=131, max=489, avg=243.90, stdev=42.57 00:12:23.315 clat percentiles (usec): 00:12:23.315 | 1.00th=[ 133], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 194], 00:12:23.315 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:12:23.315 | 70.00th=[ 223], 80.00th=[ 237], 90.00th=[ 265], 95.00th=[ 293], 00:12:23.315 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 441], 99.95th=[ 441], 00:12:23.315 | 99.99th=[ 457] 00:12:23.315 bw ( KiB/s): min= 8192, max= 8192, per=21.43%, avg=8192.00, stdev= 0.00, samples=1 00:12:23.315 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:23.315 lat (usec) : 250=51.98%, 500=47.39%, 750=0.55%, 1000=0.03% 00:12:23.315 lat (msec) : 2=0.03%, 4=0.03% 00:12:23.315 cpu : usr=1.80%, sys=6.40%, ctx=3635, majf=0, minf=11 00:12:23.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.315 issued rwts: total=1586,2048,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:12:23.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.315 job2: (groupid=0, jobs=1): err= 0: pid=66299: Mon Nov 4 17:13:23 2024 00:12:23.315 read: IOPS=1547, BW=6190KiB/s (6338kB/s)(6196KiB/1001msec) 00:12:23.315 slat (nsec): min=11790, max=45812, avg=15400.56, stdev=3725.06 00:12:23.315 clat (usec): min=187, max=2135, avg=291.54, stdev=64.09 00:12:23.315 lat (usec): min=203, max=2159, avg=306.94, stdev=65.31 00:12:23.315 clat percentiles (usec): 00:12:23.315 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:12:23.315 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:12:23.315 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 367], 00:12:23.315 | 99.00th=[ 478], 99.50th=[ 545], 99.90th=[ 783], 99.95th=[ 2147], 00:12:23.315 | 99.99th=[ 2147] 00:12:23.315 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:23.315 slat (nsec): min=16504, max=90114, avg=25363.40, stdev=7993.06 00:12:23.315 clat (usec): min=124, max=588, avg=227.58, stdev=52.17 00:12:23.315 lat (usec): min=145, max=627, avg=252.94, stdev=57.36 00:12:23.315 clat percentiles (usec): 00:12:23.315 | 1.00th=[ 151], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:12:23.315 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:12:23.316 | 70.00th=[ 229], 80.00th=[ 245], 90.00th=[ 285], 95.00th=[ 334], 00:12:23.316 | 99.00th=[ 449], 99.50th=[ 478], 99.90th=[ 519], 99.95th=[ 523], 00:12:23.316 | 99.99th=[ 586] 00:12:23.316 bw ( KiB/s): min= 8192, max= 8192, per=21.43%, avg=8192.00, stdev= 0.00, samples=1 00:12:23.316 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:23.316 lat (usec) : 250=48.62%, 500=50.96%, 750=0.36%, 1000=0.03% 00:12:23.316 lat (msec) : 4=0.03% 00:12:23.316 cpu : usr=2.20%, sys=5.30%, ctx=3597, majf=0, minf=13 00:12:23.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.316 issued rwts: total=1549,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.316 job3: (groupid=0, jobs=1): err= 0: pid=66300: Mon Nov 4 17:13:23 2024 00:12:23.316 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:23.316 slat (nsec): min=11106, max=59720, avg=13877.62, stdev=3064.41 00:12:23.316 clat (usec): min=151, max=1598, avg=188.33, stdev=44.13 00:12:23.316 lat (usec): min=162, max=1625, avg=202.21, stdev=44.67 00:12:23.316 clat percentiles (usec): 00:12:23.316 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:12:23.316 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:12:23.316 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 217], 95.00th=[ 241], 00:12:23.316 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 437], 99.95th=[ 1237], 00:12:23.316 | 99.99th=[ 1598] 00:12:23.316 write: IOPS=2868, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:12:23.316 slat (usec): min=15, max=124, avg=20.93, stdev= 4.83 00:12:23.316 clat (usec): min=104, max=523, avg=144.12, stdev=25.33 00:12:23.316 lat (usec): min=125, max=543, avg=165.05, stdev=26.20 00:12:23.316 clat percentiles (usec): 00:12:23.316 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 127], 00:12:23.316 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:12:23.316 | 70.00th=[ 149], 80.00th=[ 157], 
90.00th=[ 174], 95.00th=[ 192], 00:12:23.316 | 99.00th=[ 233], 99.50th=[ 247], 99.90th=[ 322], 99.95th=[ 453], 00:12:23.316 | 99.99th=[ 523] 00:12:23.316 bw ( KiB/s): min=12288, max=12288, per=32.14%, avg=12288.00, stdev= 0.00, samples=1 00:12:23.316 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:23.316 lat (usec) : 250=97.90%, 500=2.04%, 750=0.02% 00:12:23.316 lat (msec) : 2=0.04% 00:12:23.316 cpu : usr=2.30%, sys=7.30%, ctx=5437, majf=0, minf=3 00:12:23.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.316 issued rwts: total=2560,2871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.316 00:12:23.316 Run status group 0 (all jobs): 00:12:23.316 READ: bw=32.1MiB/s (33.7MB/s), 6190KiB/s-9.99MiB/s (6338kB/s-10.5MB/s), io=32.2MiB (33.8MB), run=1001-1003msec 00:12:23.316 WRITE: bw=37.3MiB/s (39.1MB/s), 8184KiB/s-11.2MiB/s (8380kB/s-11.7MB/s), io=37.4MiB (39.3MB), run=1001-1003msec 00:12:23.316 00:12:23.316 Disk stats (read/write): 00:12:23.316 nvme0n1: ios=2225/2560, merge=0/0, ticks=457/391, in_queue=848, util=87.68% 00:12:23.316 nvme0n2: ios=1567/1636, merge=0/0, ticks=466/362, in_queue=828, util=87.21% 00:12:23.316 nvme0n3: ios=1536/1615, merge=0/0, ticks=452/356, in_queue=808, util=89.05% 00:12:23.316 nvme0n4: ios=2200/2560, merge=0/0, ticks=408/379, in_queue=787, util=89.61% 00:12:23.316 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:23.316 [global] 00:12:23.316 thread=1 00:12:23.316 invalidate=1 00:12:23.316 rw=randwrite 00:12:23.316 time_based=1 00:12:23.316 runtime=1 00:12:23.316 ioengine=libaio 00:12:23.316 direct=1 00:12:23.316 bs=4096 00:12:23.316 iodepth=1 00:12:23.316 norandommap=0 00:12:23.316 numjobs=1 00:12:23.316 00:12:23.316 verify_dump=1 00:12:23.316 verify_backlog=512 00:12:23.316 verify_state_save=0 00:12:23.316 do_verify=1 00:12:23.316 verify=crc32c-intel 00:12:23.316 [job0] 00:12:23.316 filename=/dev/nvme0n1 00:12:23.316 [job1] 00:12:23.316 filename=/dev/nvme0n2 00:12:23.316 [job2] 00:12:23.316 filename=/dev/nvme0n3 00:12:23.316 [job3] 00:12:23.316 filename=/dev/nvme0n4 00:12:23.316 Could not set queue depth (nvme0n1) 00:12:23.316 Could not set queue depth (nvme0n2) 00:12:23.316 Could not set queue depth (nvme0n3) 00:12:23.316 Could not set queue depth (nvme0n4) 00:12:23.316 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.316 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.316 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.316 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:23.316 fio-3.35 00:12:23.316 Starting 4 threads 00:12:24.251 00:12:24.251 job0: (groupid=0, jobs=1): err= 0: pid=66358: Mon Nov 4 17:13:24 2024 00:12:24.251 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:24.251 slat (nsec): min=7665, max=73978, avg=12749.84, stdev=4782.80 00:12:24.251 clat (usec): min=236, max=440, avg=302.10, stdev=29.87 00:12:24.251 lat (usec): min=247, max=452, avg=314.85, 
stdev=29.86 00:12:24.251 clat percentiles (usec): 00:12:24.251 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 277], 00:12:24.251 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:12:24.251 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 355], 00:12:24.251 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 392], 99.95th=[ 441], 00:12:24.251 | 99.99th=[ 441] 00:12:24.251 write: IOPS=2006, BW=8028KiB/s (8221kB/s)(8036KiB/1001msec); 0 zone resets 00:12:24.251 slat (nsec): min=9841, max=78841, avg=15828.23, stdev=6263.29 00:12:24.251 clat (usec): min=154, max=467, avg=238.84, stdev=30.04 00:12:24.251 lat (usec): min=176, max=480, avg=254.67, stdev=30.73 00:12:24.251 clat percentiles (usec): 00:12:24.251 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:12:24.251 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:12:24.251 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:12:24.251 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 334], 99.95th=[ 363], 00:12:24.251 | 99.99th=[ 469] 00:12:24.251 bw ( KiB/s): min= 8192, max= 8192, per=22.44%, avg=8192.00, stdev= 0.00, samples=1 00:12:24.251 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:24.251 lat (usec) : 250=39.35%, 500=60.65% 00:12:24.251 cpu : usr=1.40%, sys=4.20%, ctx=3545, majf=0, minf=15 00:12:24.251 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:24.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.251 issued rwts: total=1536,2009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.251 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:24.251 job1: (groupid=0, jobs=1): err= 0: pid=66359: Mon Nov 4 17:13:24 2024 00:12:24.251 read: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(9.81MiB/1001msec) 00:12:24.251 slat (nsec): min=10886, max=68166, avg=15831.41, stdev=6039.15 00:12:24.251 clat (usec): min=152, max=786, avg=206.32, stdev=28.57 00:12:24.251 lat (usec): min=166, max=813, avg=222.15, stdev=29.82 00:12:24.251 clat percentiles (usec): 00:12:24.251 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:12:24.252 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:12:24.252 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 251], 00:12:24.252 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 502], 99.95th=[ 627], 00:12:24.252 | 99.99th=[ 783] 00:12:24.252 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:24.252 slat (usec): min=16, max=126, avg=23.37, stdev= 8.60 00:12:24.252 clat (usec): min=102, max=295, avg=145.53, stdev=22.83 00:12:24.252 lat (usec): min=122, max=324, avg=168.90, stdev=25.82 00:12:24.252 clat percentiles (usec): 00:12:24.252 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 126], 00:12:24.252 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 147], 00:12:24.252 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 178], 95.00th=[ 190], 00:12:24.252 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 245], 99.95th=[ 269], 00:12:24.252 | 99.99th=[ 297] 00:12:24.252 bw ( KiB/s): min=12184, max=12184, per=33.37%, avg=12184.00, stdev= 0.00, samples=1 00:12:24.252 iops : min= 3046, max= 3046, avg=3046.00, stdev= 0.00, samples=1 00:12:24.252 lat (usec) : 250=97.50%, 500=2.44%, 750=0.04%, 1000=0.02% 00:12:24.252 cpu : usr=1.30%, sys=8.60%, ctx=5072, majf=0, minf=9 00:12:24.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:24.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.252 issued rwts: total=2512,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:24.252 job2: (groupid=0, jobs=1): err= 0: pid=66360: Mon Nov 4 17:13:24 2024 00:12:24.252 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:24.252 slat (nsec): min=7496, max=44953, avg=11437.85, stdev=4756.47 00:12:24.252 clat (usec): min=230, max=396, avg=303.43, stdev=29.69 00:12:24.252 lat (usec): min=243, max=407, avg=314.87, stdev=30.11 00:12:24.252 clat percentiles (usec): 00:12:24.252 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 281], 00:12:24.252 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:12:24.252 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 359], 00:12:24.252 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 392], 99.95th=[ 396], 00:12:24.252 | 99.99th=[ 396] 00:12:24.252 write: IOPS=2004, BW=8020KiB/s (8212kB/s)(8028KiB/1001msec); 0 zone resets 00:12:24.252 slat (nsec): min=10082, max=78860, avg=22119.76, stdev=6797.57 00:12:24.252 clat (usec): min=170, max=556, avg=232.11, stdev=28.71 00:12:24.252 lat (usec): min=189, max=575, avg=254.23, stdev=30.20 00:12:24.252 clat percentiles (usec): 00:12:24.252 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 208], 00:12:24.252 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:12:24.252 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 281], 00:12:24.252 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 334], 99.95th=[ 343], 00:12:24.252 | 99.99th=[ 553] 00:12:24.252 bw ( KiB/s): min= 8192, max= 8192, per=22.44%, avg=8192.00, stdev= 0.00, samples=1 00:12:24.252 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:24.252 lat (usec) : 250=43.30%, 500=56.68%, 750=0.03% 00:12:24.252 cpu : usr=1.80%, sys=4.90%, ctx=3544, majf=0, minf=11 00:12:24.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:24.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.252 issued rwts: total=1536,2007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:24.252 job3: (groupid=0, jobs=1): err= 0: pid=66361: Mon Nov 4 17:13:24 2024 00:12:24.252 read: IOPS=2225, BW=8903KiB/s (9117kB/s)(8912KiB/1001msec) 00:12:24.252 slat (nsec): min=10746, max=56509, avg=14807.77, stdev=4813.54 00:12:24.252 clat (usec): min=162, max=2090, avg=215.20, stdev=47.54 00:12:24.252 lat (usec): min=176, max=2105, avg=230.01, stdev=47.86 00:12:24.252 clat percentiles (usec): 00:12:24.252 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:12:24.252 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:12:24.252 | 70.00th=[ 225], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 260], 00:12:24.252 | 99.00th=[ 293], 99.50th=[ 318], 99.90th=[ 388], 99.95th=[ 498], 00:12:24.252 | 99.99th=[ 2089] 00:12:24.252 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:24.252 slat (nsec): min=17021, max=88467, avg=26566.12, stdev=8701.98 00:12:24.252 clat (usec): min=107, max=1739, avg=160.18, stdev=42.91 00:12:24.252 lat (usec): min=126, max=1758, avg=186.75, stdev=44.78 00:12:24.252 
clat percentiles (usec): 00:12:24.252 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 139], 00:12:24.252 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 161], 00:12:24.252 | 70.00th=[ 167], 80.00th=[ 178], 90.00th=[ 192], 95.00th=[ 204], 00:12:24.252 | 99.00th=[ 241], 99.50th=[ 265], 99.90th=[ 420], 99.95th=[ 906], 00:12:24.252 | 99.99th=[ 1745] 00:12:24.252 bw ( KiB/s): min=10568, max=10568, per=28.95%, avg=10568.00, stdev= 0.00, samples=1 00:12:24.252 iops : min= 2642, max= 2642, avg=2642.00, stdev= 0.00, samples=1 00:12:24.252 lat (usec) : 250=95.68%, 500=4.26%, 1000=0.02% 00:12:24.252 lat (msec) : 2=0.02%, 4=0.02% 00:12:24.252 cpu : usr=2.30%, sys=7.90%, ctx=4789, majf=0, minf=11 00:12:24.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:24.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.252 issued rwts: total=2228,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:24.252 00:12:24.252 Run status group 0 (all jobs): 00:12:24.252 READ: bw=30.5MiB/s (32.0MB/s), 6138KiB/s-9.80MiB/s (6285kB/s-10.3MB/s), io=30.5MiB (32.0MB), run=1001-1001msec 00:12:24.252 WRITE: bw=35.7MiB/s (37.4MB/s), 8020KiB/s-9.99MiB/s (8212kB/s-10.5MB/s), io=35.7MiB (37.4MB), run=1001-1001msec 00:12:24.252 00:12:24.252 Disk stats (read/write): 00:12:24.252 nvme0n1: ios=1532/1536, merge=0/0, ticks=451/340, in_queue=791, util=88.78% 00:12:24.252 nvme0n2: ios=2097/2362, merge=0/0, ticks=461/373, in_queue=834, util=89.99% 00:12:24.252 nvme0n3: ios=1481/1536, merge=0/0, ticks=417/369, in_queue=786, util=89.31% 00:12:24.252 nvme0n4: ios=2054/2053, merge=0/0, ticks=453/354, in_queue=807, util=90.07% 00:12:24.252 17:13:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:24.252 [global] 00:12:24.252 thread=1 00:12:24.252 invalidate=1 00:12:24.252 rw=write 00:12:24.252 time_based=1 00:12:24.252 runtime=1 00:12:24.252 ioengine=libaio 00:12:24.252 direct=1 00:12:24.252 bs=4096 00:12:24.252 iodepth=128 00:12:24.252 norandommap=0 00:12:24.252 numjobs=1 00:12:24.252 00:12:24.252 verify_dump=1 00:12:24.252 verify_backlog=512 00:12:24.252 verify_state_save=0 00:12:24.252 do_verify=1 00:12:24.252 verify=crc32c-intel 00:12:24.252 [job0] 00:12:24.252 filename=/dev/nvme0n1 00:12:24.252 [job1] 00:12:24.252 filename=/dev/nvme0n2 00:12:24.252 [job2] 00:12:24.252 filename=/dev/nvme0n3 00:12:24.252 [job3] 00:12:24.252 filename=/dev/nvme0n4 00:12:24.252 Could not set queue depth (nvme0n1) 00:12:24.252 Could not set queue depth (nvme0n2) 00:12:24.252 Could not set queue depth (nvme0n3) 00:12:24.252 Could not set queue depth (nvme0n4) 00:12:24.252 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:24.252 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:24.252 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:24.252 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:24.252 fio-3.35 00:12:24.252 Starting 4 threads 00:12:25.628 00:12:25.628 job0: (groupid=0, jobs=1): err= 0: pid=66422: Mon Nov 4 17:13:26 2024 00:12:25.628 read: IOPS=3059, BW=12.0MiB/s 
(12.5MB/s)(12.0MiB/1004msec) 00:12:25.628 slat (usec): min=5, max=6458, avg=152.56, stdev=773.92 00:12:25.628 clat (usec): min=13481, max=26982, avg=20167.85, stdev=2887.92 00:12:25.628 lat (usec): min=16536, max=26997, avg=20320.41, stdev=2804.23 00:12:25.628 clat percentiles (usec): 00:12:25.628 | 1.00th=[14353], 5.00th=[17433], 10.00th=[17695], 20.00th=[17957], 00:12:25.628 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19006], 60.00th=[19530], 00:12:25.628 | 70.00th=[20055], 80.00th=[23725], 90.00th=[25035], 95.00th=[25822], 00:12:25.628 | 99.00th=[26870], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:12:25.628 | 99.99th=[26870] 00:12:25.628 write: IOPS=3379, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1004msec); 0 zone resets 00:12:25.628 slat (usec): min=10, max=5881, avg=149.32, stdev=689.53 00:12:25.628 clat (usec): min=573, max=26428, avg=19051.13, stdev=2650.77 00:12:25.628 lat (usec): min=4722, max=26462, avg=19200.44, stdev=2569.40 00:12:25.628 clat percentiles (usec): 00:12:25.628 | 1.00th=[ 9765], 5.00th=[15795], 10.00th=[17433], 20.00th=[17957], 00:12:25.628 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:12:25.628 | 70.00th=[19268], 80.00th=[20317], 90.00th=[22414], 95.00th=[23725], 00:12:25.628 | 99.00th=[26084], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:12:25.628 | 99.99th=[26346] 00:12:25.628 bw ( KiB/s): min=12800, max=13320, per=28.36%, avg=13060.00, stdev=367.70, samples=2 00:12:25.628 iops : min= 3200, max= 3330, avg=3265.00, stdev=91.92, samples=2 00:12:25.628 lat (usec) : 750=0.02% 00:12:25.628 lat (msec) : 10=0.59%, 20=71.99%, 50=27.41% 00:12:25.628 cpu : usr=2.79%, sys=10.47%, ctx=204, majf=0, minf=19 00:12:25.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:25.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.628 issued rwts: total=3072,3393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.628 job1: (groupid=0, jobs=1): err= 0: pid=66423: Mon Nov 4 17:13:26 2024 00:12:25.628 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:12:25.628 slat (usec): min=5, max=4417, avg=115.63, stdev=466.31 00:12:25.628 clat (usec): min=11910, max=19570, avg=15490.65, stdev=1149.22 00:12:25.628 lat (usec): min=11934, max=20114, avg=15606.28, stdev=1208.53 00:12:25.628 clat percentiles (usec): 00:12:25.628 | 1.00th=[12387], 5.00th=[13698], 10.00th=[14091], 20.00th=[14615], 00:12:25.628 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:12:25.628 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17957], 00:12:25.628 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:12:25.628 | 99.99th=[19530] 00:12:25.628 write: IOPS=4252, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1005msec); 0 zone resets 00:12:25.628 slat (usec): min=10, max=5445, avg=114.51, stdev=561.60 00:12:25.628 clat (usec): min=3788, max=22058, avg=14820.94, stdev=1695.65 00:12:25.628 lat (usec): min=4600, max=22108, avg=14935.45, stdev=1772.67 00:12:25.628 clat percentiles (usec): 00:12:25.628 | 1.00th=[10290], 5.00th=[12780], 10.00th=[13304], 20.00th=[13698], 00:12:25.628 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14615], 60.00th=[15008], 00:12:25.628 | 70.00th=[15533], 80.00th=[16188], 90.00th=[16909], 95.00th=[17433], 00:12:25.628 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21103], 99.95th=[21365], 00:12:25.628 
| 99.99th=[22152] 00:12:25.628 bw ( KiB/s): min=16416, max=16784, per=36.04%, avg=16600.00, stdev=260.22, samples=2 00:12:25.628 iops : min= 4104, max= 4196, avg=4150.00, stdev=65.05, samples=2 00:12:25.628 lat (msec) : 4=0.01%, 10=0.48%, 20=99.25%, 50=0.26% 00:12:25.628 cpu : usr=4.38%, sys=12.75%, ctx=313, majf=0, minf=7 00:12:25.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:25.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.628 issued rwts: total=4096,4274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.628 job2: (groupid=0, jobs=1): err= 0: pid=66424: Mon Nov 4 17:13:26 2024 00:12:25.628 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:12:25.628 slat (usec): min=6, max=21079, avg=335.90, stdev=1701.02 00:12:25.628 clat (usec): min=25913, max=86450, avg=42402.94, stdev=11748.61 00:12:25.628 lat (usec): min=27632, max=86474, avg=42738.83, stdev=11734.55 00:12:25.628 clat percentiles (usec): 00:12:25.628 | 1.00th=[30016], 5.00th=[31589], 10.00th=[32637], 20.00th=[34866], 00:12:25.628 | 30.00th=[36439], 40.00th=[38011], 50.00th=[39060], 60.00th=[40633], 00:12:25.628 | 70.00th=[41681], 80.00th=[43779], 90.00th=[62653], 95.00th=[73925], 00:12:25.628 | 99.00th=[86508], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:12:25.628 | 99.99th=[86508] 00:12:25.628 write: IOPS=1846, BW=7387KiB/s (7564kB/s)(7424KiB/1005msec); 0 zone resets 00:12:25.628 slat (usec): min=13, max=13094, avg=252.91, stdev=1157.12 00:12:25.628 clat (usec): min=3406, max=52064, avg=32681.33, stdev=7520.99 00:12:25.628 lat (usec): min=5567, max=52090, avg=32934.24, stdev=7474.92 00:12:25.628 clat percentiles (usec): 00:12:25.628 | 1.00th=[11338], 5.00th=[24773], 10.00th=[26608], 20.00th=[28181], 00:12:25.628 | 30.00th=[29492], 40.00th=[30278], 50.00th=[31065], 60.00th=[32113], 00:12:25.628 | 70.00th=[33817], 80.00th=[35914], 90.00th=[47449], 95.00th=[49546], 00:12:25.628 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:12:25.628 | 99.99th=[52167] 00:12:25.628 bw ( KiB/s): min= 6136, max= 7703, per=15.02%, avg=6919.50, stdev=1108.04, samples=2 00:12:25.628 iops : min= 1534, max= 1925, avg=1729.50, stdev=276.48, samples=2 00:12:25.628 lat (msec) : 4=0.03%, 10=0.44%, 20=0.97%, 50=90.77%, 100=7.78% 00:12:25.628 cpu : usr=1.89%, sys=5.68%, ctx=417, majf=0, minf=17 00:12:25.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:12:25.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.628 issued rwts: total=1536,1856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.628 job3: (groupid=0, jobs=1): err= 0: pid=66425: Mon Nov 4 17:13:26 2024 00:12:25.628 read: IOPS=1948, BW=7793KiB/s (7980kB/s)(7832KiB/1005msec) 00:12:25.628 slat (usec): min=6, max=10514, avg=276.65, stdev=1143.99 00:12:25.628 clat (usec): min=3421, max=51693, avg=35656.62, stdev=6045.41 00:12:25.628 lat (usec): min=5949, max=51728, avg=35933.27, stdev=6024.59 00:12:25.628 clat percentiles (usec): 00:12:25.628 | 1.00th=[12125], 5.00th=[25035], 10.00th=[28967], 20.00th=[31327], 00:12:25.628 | 30.00th=[33817], 40.00th=[34866], 50.00th=[36439], 60.00th=[38011], 00:12:25.628 | 70.00th=[39060], 
80.00th=[40633], 90.00th=[42206], 95.00th=[43254], 00:12:25.628 | 99.00th=[45351], 99.50th=[45351], 99.90th=[47449], 99.95th=[51643], 00:12:25.628 | 99.99th=[51643] 00:12:25.628 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:12:25.628 slat (usec): min=12, max=15686, avg=215.14, stdev=1101.39 00:12:25.628 clat (usec): min=13445, max=41240, avg=27543.44, stdev=4915.90 00:12:25.628 lat (usec): min=13483, max=41315, avg=27758.59, stdev=4999.78 00:12:25.628 clat percentiles (usec): 00:12:25.628 | 1.00th=[18220], 5.00th=[21365], 10.00th=[21627], 20.00th=[21890], 00:12:25.628 | 30.00th=[23987], 40.00th=[25560], 50.00th=[27657], 60.00th=[28967], 00:12:25.628 | 70.00th=[30278], 80.00th=[31851], 90.00th=[33817], 95.00th=[36439], 00:12:25.628 | 99.00th=[39060], 99.50th=[40109], 99.90th=[40109], 99.95th=[40633], 00:12:25.628 | 99.99th=[41157] 00:12:25.628 bw ( KiB/s): min= 8184, max= 8216, per=17.81%, avg=8200.00, stdev=22.63, samples=2 00:12:25.628 iops : min= 2046, max= 2054, avg=2050.00, stdev= 5.66, samples=2 00:12:25.628 lat (msec) : 4=0.02%, 10=0.17%, 20=1.77%, 50=98.00%, 100=0.02% 00:12:25.628 cpu : usr=2.39%, sys=6.57%, ctx=395, majf=0, minf=9 00:12:25.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:25.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.628 issued rwts: total=1958,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.628 00:12:25.628 Run status group 0 (all jobs): 00:12:25.628 READ: bw=41.4MiB/s (43.5MB/s), 6113KiB/s-15.9MiB/s (6260kB/s-16.7MB/s), io=41.6MiB (43.7MB), run=1004-1005msec 00:12:25.628 WRITE: bw=45.0MiB/s (47.2MB/s), 7387KiB/s-16.6MiB/s (7564kB/s-17.4MB/s), io=45.2MiB (47.4MB), run=1004-1005msec 00:12:25.628 00:12:25.629 Disk stats (read/write): 00:12:25.629 nvme0n1: ios=2610/3008, merge=0/0, ticks=12031/13015, in_queue=25046, util=89.48% 00:12:25.629 nvme0n2: ios=3633/3674, merge=0/0, ticks=17724/15460, in_queue=33184, util=89.81% 00:12:25.629 nvme0n3: ios=1354/1536, merge=0/0, ticks=14396/11736, in_queue=26132, util=89.36% 00:12:25.629 nvme0n4: ios=1570/2035, merge=0/0, ticks=22671/17978, in_queue=40649, util=90.23% 00:12:25.629 17:13:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:25.629 [global] 00:12:25.629 thread=1 00:12:25.629 invalidate=1 00:12:25.629 rw=randwrite 00:12:25.629 time_based=1 00:12:25.629 runtime=1 00:12:25.629 ioengine=libaio 00:12:25.629 direct=1 00:12:25.629 bs=4096 00:12:25.629 iodepth=128 00:12:25.629 norandommap=0 00:12:25.629 numjobs=1 00:12:25.629 00:12:25.629 verify_dump=1 00:12:25.629 verify_backlog=512 00:12:25.629 verify_state_save=0 00:12:25.629 do_verify=1 00:12:25.629 verify=crc32c-intel 00:12:25.629 [job0] 00:12:25.629 filename=/dev/nvme0n1 00:12:25.629 [job1] 00:12:25.629 filename=/dev/nvme0n2 00:12:25.629 [job2] 00:12:25.629 filename=/dev/nvme0n3 00:12:25.629 [job3] 00:12:25.629 filename=/dev/nvme0n4 00:12:25.629 Could not set queue depth (nvme0n1) 00:12:25.629 Could not set queue depth (nvme0n2) 00:12:25.629 Could not set queue depth (nvme0n3) 00:12:25.629 Could not set queue depth (nvme0n4) 00:12:25.629 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:25.629 job1: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:25.629 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:25.629 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:25.629 fio-3.35 00:12:25.629 Starting 4 threads 00:12:27.004 00:12:27.004 job0: (groupid=0, jobs=1): err= 0: pid=66478: Mon Nov 4 17:13:27 2024 00:12:27.004 read: IOPS=4370, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1006msec) 00:12:27.004 slat (usec): min=7, max=17994, avg=105.86, stdev=747.90 00:12:27.004 clat (usec): min=4929, max=42100, avg=14640.54, stdev=4682.58 00:12:27.004 lat (usec): min=4955, max=45280, avg=14746.40, stdev=4737.06 00:12:27.004 clat percentiles (usec): 00:12:27.004 | 1.00th=[ 5604], 5.00th=[11338], 10.00th=[12387], 20.00th=[12780], 00:12:27.004 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:12:27.004 | 70.00th=[13566], 80.00th=[14091], 90.00th=[25035], 95.00th=[26346], 00:12:27.004 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31589], 99.95th=[40633], 00:12:27.004 | 99.99th=[42206] 00:12:27.004 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:12:27.004 slat (usec): min=11, max=15873, avg=108.07, stdev=709.53 00:12:27.004 clat (usec): min=6274, max=31027, avg=13675.51, stdev=4511.01 00:12:27.004 lat (usec): min=8462, max=31052, avg=13783.59, stdev=4497.87 00:12:27.004 clat percentiles (usec): 00:12:27.004 | 1.00th=[ 8225], 5.00th=[10683], 10.00th=[11076], 20.00th=[11600], 00:12:27.004 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:12:27.004 | 70.00th=[12780], 80.00th=[13042], 90.00th=[20317], 95.00th=[25560], 00:12:27.004 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:12:27.004 | 99.99th=[31065] 00:12:27.004 bw ( KiB/s): min=16384, max=20480, per=31.30%, avg=18432.00, stdev=2896.31, samples=2 00:12:27.004 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:12:27.004 lat (msec) : 10=3.94%, 20=84.54%, 50=11.52% 00:12:27.004 cpu : usr=3.88%, sys=13.03%, ctx=192, majf=0, minf=15 00:12:27.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:27.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:27.004 issued rwts: total=4397,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:27.004 job1: (groupid=0, jobs=1): err= 0: pid=66479: Mon Nov 4 17:13:27 2024 00:12:27.004 read: IOPS=2152, BW=8611KiB/s (8818kB/s)(8680KiB/1008msec) 00:12:27.004 slat (usec): min=7, max=12218, avg=214.26, stdev=877.87 00:12:27.004 clat (usec): min=6151, max=55904, avg=26900.10, stdev=4685.07 00:12:27.004 lat (usec): min=7125, max=61593, avg=27114.36, stdev=4677.62 00:12:27.004 clat percentiles (usec): 00:12:27.004 | 1.00th=[ 7373], 5.00th=[21103], 10.00th=[22938], 20.00th=[25035], 00:12:27.004 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[27132], 00:12:27.004 | 70.00th=[27657], 80.00th=[29230], 90.00th=[32637], 95.00th=[34341], 00:12:27.004 | 99.00th=[38011], 99.50th=[39584], 99.90th=[55837], 99.95th=[55837], 00:12:27.004 | 99.99th=[55837] 00:12:27.004 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:12:27.004 slat (usec): min=9, max=26223, avg=201.48, stdev=988.34 00:12:27.004 clat (usec): min=6026, 
max=40969, avg=27061.67, stdev=4628.89 00:12:27.004 lat (usec): min=6047, max=40985, avg=27263.15, stdev=4565.60 00:12:27.004 clat percentiles (usec): 00:12:27.004 | 1.00th=[11600], 5.00th=[20579], 10.00th=[22152], 20.00th=[23725], 00:12:27.004 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[28181], 00:12:27.004 | 70.00th=[29230], 80.00th=[30016], 90.00th=[31851], 95.00th=[34341], 00:12:27.004 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:27.004 | 99.99th=[41157] 00:12:27.004 bw ( KiB/s): min= 9848, max=10605, per=17.37%, avg=10226.50, stdev=535.28, samples=2 00:12:27.004 iops : min= 2462, max= 2651, avg=2556.50, stdev=133.64, samples=2 00:12:27.004 lat (msec) : 10=0.97%, 20=2.66%, 50=96.19%, 100=0.17% 00:12:27.004 cpu : usr=2.48%, sys=6.85%, ctx=678, majf=0, minf=13 00:12:27.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:12:27.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:27.004 issued rwts: total=2170,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:27.004 job2: (groupid=0, jobs=1): err= 0: pid=66480: Mon Nov 4 17:13:27 2024 00:12:27.004 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:12:27.004 slat (usec): min=5, max=8013, avg=176.06, stdev=704.93 00:12:27.004 clat (usec): min=11096, max=34799, avg=23037.92, stdev=6187.44 00:12:27.004 lat (usec): min=13798, max=34812, avg=23213.97, stdev=6212.01 00:12:27.004 clat percentiles (usec): 00:12:27.004 | 1.00th=[12256], 5.00th=[14353], 10.00th=[14484], 20.00th=[14746], 00:12:27.004 | 30.00th=[15401], 40.00th=[24773], 50.00th=[26084], 60.00th=[26608], 00:12:27.004 | 70.00th=[26870], 80.00th=[27919], 90.00th=[29492], 95.00th=[31065], 00:12:27.004 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:12:27.004 | 99.99th=[34866] 00:12:27.004 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:12:27.004 slat (usec): min=12, max=7950, avg=171.75, stdev=645.56 00:12:27.004 clat (usec): min=5601, max=33493, avg=22201.78, stdev=6588.07 00:12:27.004 lat (usec): min=5617, max=33510, avg=22373.53, stdev=6612.16 00:12:27.004 clat percentiles (usec): 00:12:27.004 | 1.00th=[11076], 5.00th=[13698], 10.00th=[13960], 20.00th=[14091], 00:12:27.004 | 30.00th=[14615], 40.00th=[20841], 50.00th=[25297], 60.00th=[26346], 00:12:27.004 | 70.00th=[27395], 80.00th=[28705], 90.00th=[29492], 95.00th=[30016], 00:12:27.004 | 99.00th=[31065], 99.50th=[31065], 99.90th=[33424], 99.95th=[33424], 00:12:27.004 | 99.99th=[33424] 00:12:27.004 bw ( KiB/s): min=10544, max=12969, per=19.96%, avg=11756.50, stdev=1714.73, samples=2 00:12:27.004 iops : min= 2636, max= 3242, avg=2939.00, stdev=428.51, samples=2 00:12:27.004 lat (msec) : 10=0.28%, 20=36.00%, 50=63.72% 00:12:27.004 cpu : usr=2.99%, sys=8.46%, ctx=628, majf=0, minf=13 00:12:27.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:27.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:27.004 issued rwts: total=2560,3063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:27.004 job3: (groupid=0, jobs=1): err= 0: pid=66481: Mon Nov 4 17:13:27 2024 00:12:27.004 read: IOPS=4535, BW=17.7MiB/s 
(18.6MB/s)(17.8MiB/1004msec) 00:12:27.004 slat (usec): min=5, max=11620, avg=106.30, stdev=662.97 00:12:27.004 clat (usec): min=1921, max=27765, avg=14426.81, stdev=2473.99 00:12:27.004 lat (usec): min=4637, max=27777, avg=14533.11, stdev=2489.74 00:12:27.004 clat percentiles (usec): 00:12:27.004 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[13304], 20.00th=[13960], 00:12:27.004 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:12:27.004 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[15795], 00:12:27.004 | 99.00th=[25035], 99.50th=[26084], 99.90th=[27657], 99.95th=[27657], 00:12:27.004 | 99.99th=[27657] 00:12:27.004 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:12:27.004 slat (usec): min=3, max=10204, avg=104.24, stdev=609.40 00:12:27.004 clat (usec): min=3211, max=27703, avg=13349.68, stdev=1696.23 00:12:27.004 lat (usec): min=3230, max=27713, avg=13453.92, stdev=1613.65 00:12:27.004 clat percentiles (usec): 00:12:27.004 | 1.00th=[ 6390], 5.00th=[11469], 10.00th=[11994], 20.00th=[12518], 00:12:27.004 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:12:27.004 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14615], 95.00th=[15401], 00:12:27.004 | 99.00th=[18744], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:12:27.004 | 99.99th=[27657] 00:12:27.004 bw ( KiB/s): min=17032, max=19871, per=31.33%, avg=18451.50, stdev=2007.48, samples=2 00:12:27.004 iops : min= 4258, max= 4967, avg=4612.50, stdev=501.34, samples=2 00:12:27.004 lat (msec) : 2=0.01%, 4=0.15%, 10=4.57%, 20=93.23%, 50=2.03% 00:12:27.004 cpu : usr=3.29%, sys=13.46%, ctx=248, majf=0, minf=11 00:12:27.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:27.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:27.004 issued rwts: total=4554,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:27.004 00:12:27.004 Run status group 0 (all jobs): 00:12:27.004 READ: bw=53.0MiB/s (55.6MB/s), 8611KiB/s-17.7MiB/s (8818kB/s-18.6MB/s), io=53.4MiB (56.0MB), run=1004-1008msec 00:12:27.004 WRITE: bw=57.5MiB/s (60.3MB/s), 9.92MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=58.0MiB (60.8MB), run=1004-1008msec 00:12:27.004 00:12:27.004 Disk stats (read/write): 00:12:27.004 nvme0n1: ios=3634/3838, merge=0/0, ticks=51320/50068, in_queue=101388, util=88.38% 00:12:27.004 nvme0n2: ios=2036/2048, merge=0/0, ticks=23014/22945, in_queue=45959, util=88.47% 00:12:27.004 nvme0n3: ios=2375/2560, merge=0/0, ticks=12685/12353, in_queue=25038, util=88.65% 00:12:27.004 nvme0n4: ios=3651/4096, merge=0/0, ticks=50498/51107, in_queue=101605, util=89.68% 00:12:27.004 17:13:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:27.004 17:13:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66494 00:12:27.004 17:13:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:27.004 17:13:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:27.004 [global] 00:12:27.004 thread=1 00:12:27.004 invalidate=1 00:12:27.004 rw=read 00:12:27.004 time_based=1 00:12:27.004 runtime=10 00:12:27.004 ioengine=libaio 00:12:27.004 direct=1 00:12:27.004 bs=4096 00:12:27.004 iodepth=1 00:12:27.004 norandommap=1 
00:12:27.004 numjobs=1 00:12:27.004 00:12:27.004 [job0] 00:12:27.004 filename=/dev/nvme0n1 00:12:27.004 [job1] 00:12:27.004 filename=/dev/nvme0n2 00:12:27.004 [job2] 00:12:27.004 filename=/dev/nvme0n3 00:12:27.004 [job3] 00:12:27.004 filename=/dev/nvme0n4 00:12:27.004 Could not set queue depth (nvme0n1) 00:12:27.004 Could not set queue depth (nvme0n2) 00:12:27.005 Could not set queue depth (nvme0n3) 00:12:27.005 Could not set queue depth (nvme0n4) 00:12:27.005 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:27.005 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:27.005 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:27.005 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:27.005 fio-3.35 00:12:27.005 Starting 4 threads 00:12:30.288 17:13:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:30.288 fio: pid=66541, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:30.288 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33804288, buflen=4096 00:12:30.288 17:13:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:30.288 fio: pid=66540, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:30.288 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=35926016, buflen=4096 00:12:30.547 17:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:30.547 17:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:30.547 fio: pid=66538, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:30.547 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=64663552, buflen=4096 00:12:30.804 17:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:30.804 17:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:31.063 fio: pid=66539, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:31.063 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57303040, buflen=4096 00:12:31.063 00:12:31.063 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66538: Mon Nov 4 17:13:31 2024 00:12:31.063 read: IOPS=4597, BW=18.0MiB/s (18.8MB/s)(61.7MiB/3434msec) 00:12:31.063 slat (usec): min=11, max=9841, avg=16.08, stdev=135.06 00:12:31.063 clat (usec): min=129, max=2829, avg=200.09, stdev=50.16 00:12:31.063 lat (usec): min=141, max=10024, avg=216.18, stdev=143.98 00:12:31.063 clat percentiles (usec): 00:12:31.063 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 169], 00:12:31.063 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 204], 00:12:31.063 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 245], 95.00th=[ 258], 00:12:31.063 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 498], 
99.95th=[ 1074], 00:12:31.063 | 99.99th=[ 2540] 00:12:31.063 bw ( KiB/s): min=17568, max=18464, per=35.77%, avg=18037.33, stdev=296.74, samples=6 00:12:31.063 iops : min= 4392, max= 4616, avg=4509.33, stdev=74.19, samples=6 00:12:31.063 lat (usec) : 250=92.39%, 500=7.51%, 750=0.04%, 1000=0.01% 00:12:31.063 lat (msec) : 2=0.04%, 4=0.01% 00:12:31.063 cpu : usr=1.05%, sys=5.68%, ctx=15796, majf=0, minf=1 00:12:31.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.063 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.063 issued rwts: total=15788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.063 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66539: Mon Nov 4 17:13:31 2024 00:12:31.063 read: IOPS=3768, BW=14.7MiB/s (15.4MB/s)(54.6MiB/3713msec) 00:12:31.063 slat (usec): min=9, max=15325, avg=20.41, stdev=227.50 00:12:31.063 clat (usec): min=129, max=4191, avg=243.39, stdev=76.66 00:12:31.063 lat (usec): min=144, max=15616, avg=263.80, stdev=240.30 00:12:31.063 clat percentiles (usec): 00:12:31.063 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 215], 00:12:31.063 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:12:31.063 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 302], 00:12:31.063 | 99.00th=[ 334], 99.50th=[ 375], 99.90th=[ 988], 99.95th=[ 1942], 00:12:31.063 | 99.99th=[ 2802] 00:12:31.063 bw ( KiB/s): min=14104, max=17742, per=29.38%, avg=14814.57, stdev=1301.45, samples=7 00:12:31.063 iops : min= 3526, max= 4435, avg=3703.57, stdev=325.18, samples=7 00:12:31.063 lat (usec) : 250=50.66%, 500=49.08%, 750=0.10%, 1000=0.06% 00:12:31.063 lat (msec) : 2=0.06%, 4=0.03%, 10=0.01% 00:12:31.063 cpu : usr=1.21%, sys=5.17%, ctx=13999, majf=0, minf=1 00:12:31.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.063 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.063 issued rwts: total=13991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.063 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66540: Mon Nov 4 17:13:31 2024 00:12:31.063 read: IOPS=2755, BW=10.8MiB/s (11.3MB/s)(34.3MiB/3184msec) 00:12:31.063 slat (usec): min=9, max=11427, avg=23.66, stdev=159.88 00:12:31.063 clat (usec): min=140, max=5317, avg=337.32, stdev=97.43 00:12:31.063 lat (usec): min=157, max=11720, avg=360.98, stdev=187.30 00:12:31.063 clat percentiles (usec): 00:12:31.063 | 1.00th=[ 178], 5.00th=[ 241], 10.00th=[ 265], 20.00th=[ 289], 00:12:31.063 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 343], 00:12:31.063 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 412], 95.00th=[ 482], 00:12:31.063 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[ 1057], 99.95th=[ 1467], 00:12:31.063 | 99.99th=[ 5342] 00:12:31.063 bw ( KiB/s): min=10256, max=11552, per=21.56%, avg=10869.33, stdev=458.80, samples=6 00:12:31.063 iops : min= 2564, max= 2888, avg=2717.33, stdev=114.70, samples=6 00:12:31.063 lat (usec) : 250=6.71%, 500=89.23%, 750=3.82%, 1000=0.13% 00:12:31.063 lat (msec) : 2=0.07%, 4=0.02%, 10=0.01% 00:12:31.063 cpu : usr=1.04%, sys=5.18%, 
ctx=8774, majf=0, minf=1 00:12:31.063 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.063 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.063 issued rwts: total=8772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.063 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.063 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66541: Mon Nov 4 17:13:31 2024 00:12:31.063 read: IOPS=2820, BW=11.0MiB/s (11.6MB/s)(32.2MiB/2926msec) 00:12:31.063 slat (usec): min=13, max=104, avg=18.28, stdev= 4.60 00:12:31.063 clat (usec): min=171, max=2269, avg=333.92, stdev=59.86 00:12:31.063 lat (usec): min=187, max=2294, avg=352.20, stdev=59.96 00:12:31.063 clat percentiles (usec): 00:12:31.063 | 1.00th=[ 208], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 297], 00:12:31.063 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 343], 00:12:31.063 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 408], 00:12:31.063 | 99.00th=[ 457], 99.50th=[ 523], 99.90th=[ 930], 99.95th=[ 1057], 00:12:31.063 | 99.99th=[ 2278] 00:12:31.063 bw ( KiB/s): min=11096, max=11464, per=22.44%, avg=11313.60, stdev=152.57, samples=5 00:12:31.063 iops : min= 2774, max= 2866, avg=2828.40, stdev=38.14, samples=5 00:12:31.063 lat (usec) : 250=3.40%, 500=96.01%, 750=0.34%, 1000=0.16% 00:12:31.063 lat (msec) : 2=0.06%, 4=0.01% 00:12:31.064 cpu : usr=1.26%, sys=4.58%, ctx=8254, majf=0, minf=2 00:12:31.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.064 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.064 issued rwts: total=8254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.064 00:12:31.064 Run status group 0 (all jobs): 00:12:31.064 READ: bw=49.2MiB/s (51.6MB/s), 10.8MiB/s-18.0MiB/s (11.3MB/s-18.8MB/s), io=183MiB (192MB), run=2926-3713msec 00:12:31.064 00:12:31.064 Disk stats (read/write): 00:12:31.064 nvme0n1: ios=15293/0, merge=0/0, ticks=3137/0, in_queue=3137, util=94.79% 00:12:31.064 nvme0n2: ios=13362/0, merge=0/0, ticks=3361/0, in_queue=3361, util=94.80% 00:12:31.064 nvme0n3: ios=8454/0, merge=0/0, ticks=2936/0, in_queue=2936, util=96.09% 00:12:31.064 nvme0n4: ios=8038/0, merge=0/0, ticks=2704/0, in_queue=2704, util=96.74% 00:12:31.064 17:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:31.064 17:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:31.322 17:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:31.322 17:13:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:31.579 17:13:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:31.579 17:13:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:31.837 
17:13:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:31.837 17:13:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:32.095 17:13:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:32.095 17:13:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66494 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.353 nvmf hotplug test: fio failed as expected 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:32.353 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:32.921 17:13:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.921 rmmod nvme_tcp 00:12:32.921 rmmod nvme_fabrics 00:12:32.921 rmmod nvme_keyring 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66119 ']' 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66119 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 66119 ']' 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 66119 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66119 00:12:32.921 killing process with pid 66119 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66119' 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 66119 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 66119 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.921 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:33.180 00:12:33.180 real 0m19.719s 00:12:33.180 user 1m13.861s 00:12:33.180 sys 0m9.829s 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:33.180 17:13:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.180 ************************************ 00:12:33.180 END TEST nvmf_fio_target 00:12:33.180 ************************************ 00:12:33.439 17:13:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:33.439 17:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:33.439 17:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:33.439 17:13:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:33.439 ************************************ 00:12:33.439 START TEST nvmf_bdevio 00:12:33.439 ************************************ 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:33.439 * Looking for test storage... 
00:12:33.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:33.439 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.440 --rc genhtml_branch_coverage=1 00:12:33.440 --rc genhtml_function_coverage=1 00:12:33.440 --rc genhtml_legend=1 00:12:33.440 --rc geninfo_all_blocks=1 00:12:33.440 --rc geninfo_unexecuted_blocks=1 00:12:33.440 00:12:33.440 ' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.440 --rc genhtml_branch_coverage=1 00:12:33.440 --rc genhtml_function_coverage=1 00:12:33.440 --rc genhtml_legend=1 00:12:33.440 --rc geninfo_all_blocks=1 00:12:33.440 --rc geninfo_unexecuted_blocks=1 00:12:33.440 00:12:33.440 ' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.440 --rc genhtml_branch_coverage=1 00:12:33.440 --rc genhtml_function_coverage=1 00:12:33.440 --rc genhtml_legend=1 00:12:33.440 --rc geninfo_all_blocks=1 00:12:33.440 --rc geninfo_unexecuted_blocks=1 00:12:33.440 00:12:33.440 ' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.440 --rc genhtml_branch_coverage=1 00:12:33.440 --rc genhtml_function_coverage=1 00:12:33.440 --rc genhtml_legend=1 00:12:33.440 --rc geninfo_all_blocks=1 00:12:33.440 --rc geninfo_unexecuted_blocks=1 00:12:33.440 00:12:33.440 ' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.440 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.440 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:33.699 Cannot find device "nvmf_init_br" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:33.699 Cannot find device "nvmf_init_br2" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:33.699 Cannot find device "nvmf_tgt_br" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:33.699 Cannot find device "nvmf_tgt_br2" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:33.699 Cannot find device "nvmf_init_br" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:33.699 Cannot find device "nvmf_init_br2" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:33.699 Cannot find device "nvmf_tgt_br" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:33.699 Cannot find device "nvmf_tgt_br2" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:33.699 Cannot find device "nvmf_br" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:33.699 Cannot find device "nvmf_init_if" 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:33.699 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:33.699 Cannot find device "nvmf_init_if2" 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:33.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:33.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:33.700 
17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:33.700 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:33.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:33.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:33.958 00:12:33.958 --- 10.0.0.3 ping statistics --- 00:12:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.958 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:33.958 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:33.958 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:12:33.958 00:12:33.958 --- 10.0.0.4 ping statistics --- 00:12:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.958 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:33.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:33.958 00:12:33.958 --- 10.0.0.1 ping statistics --- 00:12:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.958 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:33.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:33.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:12:33.958 00:12:33.958 --- 10.0.0.2 ping statistics --- 00:12:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.958 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.958 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66865 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66865 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 66865 ']' 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:33.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:33.959 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.959 [2024-11-04 17:13:34.719582] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:12:33.959 [2024-11-04 17:13:34.719653] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.217 [2024-11-04 17:13:34.870605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.217 [2024-11-04 17:13:34.937347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.217 [2024-11-04 17:13:34.937409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.217 [2024-11-04 17:13:34.937423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.217 [2024-11-04 17:13:34.937434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.217 [2024-11-04 17:13:34.937443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.217 [2024-11-04 17:13:34.938999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:34.217 [2024-11-04 17:13:34.939142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:34.217 [2024-11-04 17:13:34.939280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:34.217 [2024-11-04 17:13:34.939284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.217 [2024-11-04 17:13:35.000420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.476 [2024-11-04 17:13:35.112821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.476 Malloc0 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:34.476 [2024-11-04 17:13:35.188057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:34.476 { 00:12:34.476 "params": { 00:12:34.476 "name": "Nvme$subsystem", 00:12:34.476 "trtype": "$TEST_TRANSPORT", 00:12:34.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:34.476 "adrfam": "ipv4", 00:12:34.476 "trsvcid": "$NVMF_PORT", 00:12:34.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:34.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:34.476 "hdgst": ${hdgst:-false}, 00:12:34.476 "ddgst": ${ddgst:-false} 00:12:34.476 }, 00:12:34.476 "method": "bdev_nvme_attach_controller" 00:12:34.476 } 00:12:34.476 EOF 00:12:34.476 )") 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:34.476 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:34.476 "params": { 00:12:34.476 "name": "Nvme1", 00:12:34.476 "trtype": "tcp", 00:12:34.476 "traddr": "10.0.0.3", 00:12:34.476 "adrfam": "ipv4", 00:12:34.476 "trsvcid": "4420", 00:12:34.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:34.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:34.476 "hdgst": false, 00:12:34.476 "ddgst": false 00:12:34.476 }, 00:12:34.476 "method": "bdev_nvme_attach_controller" 00:12:34.476 }' 00:12:34.476 [2024-11-04 17:13:35.244142] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:12:34.476 [2024-11-04 17:13:35.244223] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66894 ] 00:12:34.735 [2024-11-04 17:13:35.388509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:34.735 [2024-11-04 17:13:35.445120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.735 [2024-11-04 17:13:35.445258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.735 [2024-11-04 17:13:35.445259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.735 [2024-11-04 17:13:35.508024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:34.993 I/O targets: 00:12:34.993 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:34.993 00:12:34.993 00:12:34.993 CUnit - A unit testing framework for C - Version 2.1-3 00:12:34.993 http://cunit.sourceforge.net/ 00:12:34.993 00:12:34.993 00:12:34.993 Suite: bdevio tests on: Nvme1n1 00:12:34.993 Test: blockdev write read block ...passed 00:12:34.993 Test: blockdev write zeroes read block ...passed 00:12:34.993 Test: blockdev write zeroes read no split ...passed 00:12:34.993 Test: blockdev write zeroes read split ...passed 00:12:34.993 Test: blockdev write zeroes read split partial ...passed 00:12:34.994 Test: blockdev reset ...[2024-11-04 17:13:35.659251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:34.994 [2024-11-04 17:13:35.659357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170c180 (9): Bad file descriptor 00:12:34.994 [2024-11-04 17:13:35.674400] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:34.994 passed 00:12:34.994 Test: blockdev write read 8 blocks ...passed 00:12:34.994 Test: blockdev write read size > 128k ...passed 00:12:34.994 Test: blockdev write read invalid size ...passed 00:12:34.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:34.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:34.994 Test: blockdev write read max offset ...passed 00:12:34.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:34.994 Test: blockdev writev readv 8 blocks ...passed 00:12:34.994 Test: blockdev writev readv 30 x 1block ...passed 00:12:34.994 Test: blockdev writev readv block ...passed 00:12:34.994 Test: blockdev writev readv size > 128k ...passed 00:12:34.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:34.994 Test: blockdev comparev and writev ...[2024-11-04 17:13:35.682762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.994 [2024-11-04 17:13:35.682811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.682838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.994 [2024-11-04 17:13:35.682851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.683312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.994 [2024-11-04 17:13:35.683346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.683368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.994 [2024-11-04 17:13:35.683381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.683683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.994 [2024-11-04 17:13:35.683716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.683738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.994 [2024-11-04 17:13:35.683750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.684153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.994 [2024-11-04 17:13:35.684187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.684224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:34.994 [2024-11-04 17:13:35.684239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:34.994 passed 00:12:34.994 Test: blockdev nvme passthru rw ...passed 00:12:34.994 Test: blockdev nvme passthru vendor specific ...[2024-11-04 17:13:35.685172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.994 [2024-11-04 17:13:35.685200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.685360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.994 [2024-11-04 17:13:35.685396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.685522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.994 [2024-11-04 17:13:35.685548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:34.994 [2024-11-04 17:13:35.685668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:34.994 [2024-11-04 17:13:35.685692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:34.994 passed 00:12:34.994 Test: blockdev nvme admin passthru ...passed 00:12:34.994 Test: blockdev copy ...passed 00:12:34.994 00:12:34.994 Run Summary: Type Total Ran Passed Failed Inactive 00:12:34.994 suites 1 1 n/a 0 0 00:12:34.994 tests 23 23 23 0 0 00:12:34.994 asserts 152 152 152 0 n/a 00:12:34.994 00:12:34.994 Elapsed time = 0.150 seconds 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.253 rmmod nvme_tcp 00:12:35.253 rmmod nvme_fabrics 00:12:35.253 rmmod nvme_keyring 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66865 ']' 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66865 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 66865 ']' 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 66865 00:12:35.253 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:12:35.253 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:35.253 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66865 00:12:35.253 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:12:35.253 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:12:35.253 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66865' 00:12:35.253 killing process with pid 66865 00:12:35.253 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 66865 00:12:35.253 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 66865 00:12:35.512 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.512 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.512 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.512 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:35.512 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:35.512 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.513 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.513 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.513 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:35.513 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:35.513 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:35.513 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:35.771 00:12:35.771 real 0m2.479s 00:12:35.771 user 0m6.527s 00:12:35.771 sys 0m0.856s 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:35.771 ************************************ 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.771 END TEST nvmf_bdevio 00:12:35.771 ************************************ 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:35.771 00:12:35.771 real 2m32.733s 00:12:35.771 user 6m38.368s 00:12:35.771 sys 0m52.590s 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:35.771 ************************************ 00:12:35.771 END TEST nvmf_target_core 00:12:35.771 17:13:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:35.771 ************************************ 00:12:35.771 17:13:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:35.771 17:13:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:35.772 17:13:36 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:35.772 17:13:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.772 ************************************ 00:12:35.772 START TEST nvmf_target_extra 00:12:35.772 ************************************ 00:12:35.772 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:36.032 * Looking for test storage... 
00:12:36.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:36.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.032 --rc genhtml_branch_coverage=1 00:12:36.032 --rc genhtml_function_coverage=1 00:12:36.032 --rc genhtml_legend=1 00:12:36.032 --rc geninfo_all_blocks=1 00:12:36.032 --rc geninfo_unexecuted_blocks=1 00:12:36.032 00:12:36.032 ' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:36.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.032 --rc genhtml_branch_coverage=1 00:12:36.032 --rc genhtml_function_coverage=1 00:12:36.032 --rc genhtml_legend=1 00:12:36.032 --rc geninfo_all_blocks=1 00:12:36.032 --rc geninfo_unexecuted_blocks=1 00:12:36.032 00:12:36.032 ' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:36.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.032 --rc genhtml_branch_coverage=1 00:12:36.032 --rc genhtml_function_coverage=1 00:12:36.032 --rc genhtml_legend=1 00:12:36.032 --rc geninfo_all_blocks=1 00:12:36.032 --rc geninfo_unexecuted_blocks=1 00:12:36.032 00:12:36.032 ' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:36.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.032 --rc genhtml_branch_coverage=1 00:12:36.032 --rc genhtml_function_coverage=1 00:12:36.032 --rc genhtml_legend=1 00:12:36.032 --rc geninfo_all_blocks=1 00:12:36.032 --rc geninfo_unexecuted_blocks=1 00:12:36.032 00:12:36.032 ' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.032 17:13:36 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.032 ************************************ 00:12:36.032 START TEST nvmf_auth_target 00:12:36.032 ************************************ 00:12:36.032 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:36.292 * Looking for test storage... 
00:12:36.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:36.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.292 --rc genhtml_branch_coverage=1 00:12:36.292 --rc genhtml_function_coverage=1 00:12:36.292 --rc genhtml_legend=1 00:12:36.292 --rc geninfo_all_blocks=1 00:12:36.292 --rc geninfo_unexecuted_blocks=1 00:12:36.292 00:12:36.292 ' 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:36.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.292 --rc genhtml_branch_coverage=1 00:12:36.292 --rc genhtml_function_coverage=1 00:12:36.292 --rc genhtml_legend=1 00:12:36.292 --rc geninfo_all_blocks=1 00:12:36.292 --rc geninfo_unexecuted_blocks=1 00:12:36.292 00:12:36.292 ' 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:36.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.292 --rc genhtml_branch_coverage=1 00:12:36.292 --rc genhtml_function_coverage=1 00:12:36.292 --rc genhtml_legend=1 00:12:36.292 --rc geninfo_all_blocks=1 00:12:36.292 --rc geninfo_unexecuted_blocks=1 00:12:36.292 00:12:36.292 ' 00:12:36.292 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:36.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.293 --rc genhtml_branch_coverage=1 00:12:36.293 --rc genhtml_function_coverage=1 00:12:36.293 --rc genhtml_legend=1 00:12:36.293 --rc geninfo_all_blocks=1 00:12:36.293 --rc geninfo_unexecuted_blocks=1 00:12:36.293 00:12:36.293 ' 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.293 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.293 
17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.293 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:36.293 Cannot find device "nvmf_init_br" 00:12:36.293 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:36.293 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:36.293 Cannot find device "nvmf_init_br2" 00:12:36.293 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:36.293 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:36.293 Cannot find device "nvmf_tgt_br" 00:12:36.293 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:36.293 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.293 Cannot find device "nvmf_tgt_br2" 00:12:36.293 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:36.294 Cannot find device "nvmf_init_br" 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:36.294 Cannot find device "nvmf_init_br2" 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:36.294 Cannot find device "nvmf_tgt_br" 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:36.294 Cannot find device "nvmf_tgt_br2" 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:36.294 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:36.551 Cannot find device "nvmf_br" 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:36.551 Cannot find device "nvmf_init_if" 00:12:36.551 17:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:36.551 Cannot find device "nvmf_init_if2" 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:36.551 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:36.552 17:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:36.552 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:36.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:36.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:12:36.816 00:12:36.816 --- 10.0.0.3 ping statistics --- 00:12:36.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.816 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:36.816 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:36.816 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:12:36.816 00:12:36.816 --- 10.0.0.4 ping statistics --- 00:12:36.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.816 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:36.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:36.816 00:12:36.816 --- 10.0.0.1 ping statistics --- 00:12:36.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.816 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:36.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:12:36.816 00:12:36.816 --- 10.0.0.2 ping statistics --- 00:12:36.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.816 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67176 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67176 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67176 ']' 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:36.816 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.084 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:37.084 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:37.084 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:37.084 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:37.084 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67200 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=01d3845b6d08c2e651012b2e78a6f9df864166b90fd8a90c 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.MUs 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 01d3845b6d08c2e651012b2e78a6f9df864166b90fd8a90c 0 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 01d3845b6d08c2e651012b2e78a6f9df864166b90fd8a90c 0 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=01d3845b6d08c2e651012b2e78a6f9df864166b90fd8a90c 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:37.342 17:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.MUs 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.MUs 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.MUs 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:37.342 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1aea13cc255bf91bacb30a451c03992451fb26fc19883a2bcc2a7f174333e84d 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Moq 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1aea13cc255bf91bacb30a451c03992451fb26fc19883a2bcc2a7f174333e84d 3 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1aea13cc255bf91bacb30a451c03992451fb26fc19883a2bcc2a7f174333e84d 3 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1aea13cc255bf91bacb30a451c03992451fb26fc19883a2bcc2a7f174333e84d 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:37.343 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Moq 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Moq 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Moq 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:37.343 17:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1bb26ab9e88945d45f220e32b90d33a2 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xbx 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1bb26ab9e88945d45f220e32b90d33a2 1 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1bb26ab9e88945d45f220e32b90d33a2 1 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1bb26ab9e88945d45f220e32b90d33a2 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xbx 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xbx 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.xbx 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=42990361d12cbe66c6c9fdfbdfef6fee92c92ed48a55e01d 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.iO5 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 42990361d12cbe66c6c9fdfbdfef6fee92c92ed48a55e01d 2 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 42990361d12cbe66c6c9fdfbdfef6fee92c92ed48a55e01d 2 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=42990361d12cbe66c6c9fdfbdfef6fee92c92ed48a55e01d 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:37.343 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.iO5 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.iO5 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.iO5 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aa5d9d34fb24b78a98c78158d66d5c861d828cbeee8d9e70 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xad 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aa5d9d34fb24b78a98c78158d66d5c861d828cbeee8d9e70 2 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aa5d9d34fb24b78a98c78158d66d5c861d828cbeee8d9e70 2 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aa5d9d34fb24b78a98c78158d66d5c861d828cbeee8d9e70 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xad 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xad 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.xad 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:37.602 17:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f2a26a7dc6656d36db8f4574689e54bd 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9bU 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f2a26a7dc6656d36db8f4574689e54bd 1 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f2a26a7dc6656d36db8f4574689e54bd 1 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f2a26a7dc6656d36db8f4574689e54bd 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9bU 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9bU 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.9bU 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a89bfa9b9acddfe8aa4570bda253a9fb0c59434ea97e8095b08a975f434a3f16 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.EMm 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
a89bfa9b9acddfe8aa4570bda253a9fb0c59434ea97e8095b08a975f434a3f16 3 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a89bfa9b9acddfe8aa4570bda253a9fb0c59434ea97e8095b08a975f434a3f16 3 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a89bfa9b9acddfe8aa4570bda253a9fb0c59434ea97e8095b08a975f434a3f16 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.EMm 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.EMm 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.EMm 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67176 00:12:37.602 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67176 ']' 00:12:37.603 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.603 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:37.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.603 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.603 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:37.603 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67200 /var/tmp/host.sock 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67200 ']' 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:38.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
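At this point both applications are up (nvmf_tgt with -L nvmf_auth inside the namespace, and the host-side spdk_tgt listening on /var/tmp/host.sock with -L nvme_auth) and four key/ctrlr-key pairs have been generated. Everything that follows repeats one pattern per key and per digest/dhgroup combination. A rough outline of that loop, using the same rpc.py calls that appear below; $rpc and $hostnqn are only shorthand for this sketch, and the DHHC-1 wrapping that format_dhchap_key performs via the inline python step is not reproduced here (it packs the raw hex key into the DHHC-1:<digest>:<base64>: secret form seen in the later nvme connect commands).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288

    # register the generated key files on the target (default /var/tmp/spdk.sock) and on the host app
    $rpc keyring_file_add_key key0 /tmp/spdk.key-null.MUs
    $rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MUs
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Moq
    $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Moq

    # pin the host to one digest/dhgroup combination, then allow the host on the subsystem with that key pair
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # attach from the host side with the same keys and verify the qpair finished DH-HMAC-CHAP
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The nvme connect / nvme disconnect steps further down exercise the same secrets through the kernel NVMe/TCP initiator rather than the SPDK bdev path.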
00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:38.170 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.171 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.171 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.171 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:38.171 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MUs 00:12:38.171 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.171 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.431 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.431 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.MUs 00:12:38.431 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MUs 00:12:38.431 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Moq ]] 00:12:38.431 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Moq 00:12:38.431 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.431 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.431 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.431 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Moq 00:12:38.431 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Moq 00:12:38.998 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:38.998 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xbx 00:12:38.998 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.998 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.998 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.998 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.xbx 00:12:38.998 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.xbx 00:12:39.257 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.iO5 ]] 00:12:39.257 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iO5 00:12:39.257 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.257 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.257 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.257 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iO5 00:12:39.257 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iO5 00:12:39.517 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:39.517 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xad 00:12:39.517 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.517 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.517 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.517 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xad 00:12:39.517 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xad 00:12:39.777 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.9bU ]] 00:12:39.777 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9bU 00:12:39.777 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.777 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.777 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.777 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9bU 00:12:39.777 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9bU 00:12:40.035 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:40.035 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EMm 00:12:40.035 17:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.035 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.035 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.035 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.EMm 00:12:40.035 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.EMm 00:12:40.294 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:40.294 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:40.294 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.294 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.294 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:40.294 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.552 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.811 00:12:40.811 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.811 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.811 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.070 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.070 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.070 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.070 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.070 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.070 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.070 { 00:12:41.070 "cntlid": 1, 00:12:41.070 "qid": 0, 00:12:41.070 "state": "enabled", 00:12:41.070 "thread": "nvmf_tgt_poll_group_000", 00:12:41.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:12:41.070 "listen_address": { 00:12:41.070 "trtype": "TCP", 00:12:41.070 "adrfam": "IPv4", 00:12:41.070 "traddr": "10.0.0.3", 00:12:41.070 "trsvcid": "4420" 00:12:41.070 }, 00:12:41.070 "peer_address": { 00:12:41.070 "trtype": "TCP", 00:12:41.070 "adrfam": "IPv4", 00:12:41.070 "traddr": "10.0.0.1", 00:12:41.070 "trsvcid": "51974" 00:12:41.070 }, 00:12:41.070 "auth": { 00:12:41.070 "state": "completed", 00:12:41.070 "digest": "sha256", 00:12:41.070 "dhgroup": "null" 00:12:41.070 } 00:12:41.070 } 00:12:41.070 ]' 00:12:41.070 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.329 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.329 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.329 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:41.329 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.329 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.329 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.329 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.589 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:12:41.589 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:12:45.833 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.833 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:45.833 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.833 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.833 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.833 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.833 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:45.833 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.092 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.093 17:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.661 00:12:46.661 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.661 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.661 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.925 { 00:12:46.925 "cntlid": 3, 00:12:46.925 "qid": 0, 00:12:46.925 "state": "enabled", 00:12:46.925 "thread": "nvmf_tgt_poll_group_000", 00:12:46.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:12:46.925 "listen_address": { 00:12:46.925 "trtype": "TCP", 00:12:46.925 "adrfam": "IPv4", 00:12:46.925 "traddr": "10.0.0.3", 00:12:46.925 "trsvcid": "4420" 00:12:46.925 }, 00:12:46.925 "peer_address": { 00:12:46.925 "trtype": "TCP", 00:12:46.925 "adrfam": "IPv4", 00:12:46.925 "traddr": "10.0.0.1", 00:12:46.925 "trsvcid": "52010" 00:12:46.925 }, 00:12:46.925 "auth": { 00:12:46.925 "state": "completed", 00:12:46.925 "digest": "sha256", 00:12:46.925 "dhgroup": "null" 00:12:46.925 } 00:12:46.925 } 00:12:46.925 ]' 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.925 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.185 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret 
DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:12:47.185 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.123 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.383 00:12:48.643 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.643 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.643 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.902 { 00:12:48.902 "cntlid": 5, 00:12:48.902 "qid": 0, 00:12:48.902 "state": "enabled", 00:12:48.902 "thread": "nvmf_tgt_poll_group_000", 00:12:48.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:12:48.902 "listen_address": { 00:12:48.902 "trtype": "TCP", 00:12:48.902 "adrfam": "IPv4", 00:12:48.902 "traddr": "10.0.0.3", 00:12:48.902 "trsvcid": "4420" 00:12:48.902 }, 00:12:48.902 "peer_address": { 00:12:48.902 "trtype": "TCP", 00:12:48.902 "adrfam": "IPv4", 00:12:48.902 "traddr": "10.0.0.1", 00:12:48.902 "trsvcid": "52042" 00:12:48.902 }, 00:12:48.902 "auth": { 00:12:48.902 "state": "completed", 00:12:48.902 "digest": "sha256", 00:12:48.902 "dhgroup": "null" 00:12:48.902 } 00:12:48.902 } 00:12:48.902 ]' 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.902 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.903 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:48.903 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.903 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.903 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.903 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.161 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:12:49.161 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:12:50.098 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.098 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:50.098 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.098 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.098 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.098 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:50.098 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:50.357 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:50.357 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.358 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.616 00:12:50.616 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.616 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.616 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.893 { 00:12:50.893 "cntlid": 7, 00:12:50.893 "qid": 0, 00:12:50.893 "state": "enabled", 00:12:50.893 "thread": "nvmf_tgt_poll_group_000", 00:12:50.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:12:50.893 "listen_address": { 00:12:50.893 "trtype": "TCP", 00:12:50.893 "adrfam": "IPv4", 00:12:50.893 "traddr": "10.0.0.3", 00:12:50.893 "trsvcid": "4420" 00:12:50.893 }, 00:12:50.893 "peer_address": { 00:12:50.893 "trtype": "TCP", 00:12:50.893 "adrfam": "IPv4", 00:12:50.893 "traddr": "10.0.0.1", 00:12:50.893 "trsvcid": "54668" 00:12:50.893 }, 00:12:50.893 "auth": { 00:12:50.893 "state": "completed", 00:12:50.893 "digest": "sha256", 00:12:50.893 "dhgroup": "null" 00:12:50.893 } 00:12:50.893 } 00:12:50.893 ]' 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:50.893 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.160 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.161 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.161 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.419 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:12:51.419 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:51.987 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.246 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.246 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.246 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.246 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.246 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.813 00:12:52.813 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.813 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.813 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.072 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.072 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.072 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.072 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.072 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.072 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.072 { 00:12:53.072 "cntlid": 9, 00:12:53.072 "qid": 0, 00:12:53.072 "state": "enabled", 00:12:53.072 "thread": "nvmf_tgt_poll_group_000", 00:12:53.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:12:53.073 "listen_address": { 00:12:53.073 "trtype": "TCP", 00:12:53.073 "adrfam": "IPv4", 00:12:53.073 "traddr": "10.0.0.3", 00:12:53.073 "trsvcid": "4420" 00:12:53.073 }, 00:12:53.073 "peer_address": { 00:12:53.073 "trtype": "TCP", 00:12:53.073 "adrfam": "IPv4", 00:12:53.073 "traddr": "10.0.0.1", 00:12:53.073 "trsvcid": "54710" 00:12:53.073 }, 00:12:53.073 "auth": { 00:12:53.073 "state": "completed", 00:12:53.073 "digest": "sha256", 00:12:53.073 "dhgroup": "ffdhe2048" 00:12:53.073 } 00:12:53.073 } 00:12:53.073 ]' 00:12:53.073 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.073 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.073 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.073 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:53.073 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.073 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.073 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.073 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.332 
17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:12:53.332 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:12:54.268 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.268 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:54.268 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.268 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.268 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.268 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.268 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:54.268 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.527 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.786 00:12:54.786 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.786 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.786 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.045 { 00:12:55.045 "cntlid": 11, 00:12:55.045 "qid": 0, 00:12:55.045 "state": "enabled", 00:12:55.045 "thread": "nvmf_tgt_poll_group_000", 00:12:55.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:12:55.045 "listen_address": { 00:12:55.045 "trtype": "TCP", 00:12:55.045 "adrfam": "IPv4", 00:12:55.045 "traddr": "10.0.0.3", 00:12:55.045 "trsvcid": "4420" 00:12:55.045 }, 00:12:55.045 "peer_address": { 00:12:55.045 "trtype": "TCP", 00:12:55.045 "adrfam": "IPv4", 00:12:55.045 "traddr": "10.0.0.1", 00:12:55.045 "trsvcid": "54748" 00:12:55.045 }, 00:12:55.045 "auth": { 00:12:55.045 "state": "completed", 00:12:55.045 "digest": "sha256", 00:12:55.045 "dhgroup": "ffdhe2048" 00:12:55.045 } 00:12:55.045 } 00:12:55.045 ]' 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.045 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.345 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:55.345 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.345 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.345 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.345 
17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.604 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:12:55.604 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:12:56.171 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.171 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:56.171 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.171 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.171 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.171 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.171 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:56.171 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.737 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.995 00:12:56.995 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.995 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.995 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.254 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.254 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.254 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.254 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.254 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.254 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.254 { 00:12:57.254 "cntlid": 13, 00:12:57.254 "qid": 0, 00:12:57.254 "state": "enabled", 00:12:57.254 "thread": "nvmf_tgt_poll_group_000", 00:12:57.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:12:57.254 "listen_address": { 00:12:57.254 "trtype": "TCP", 00:12:57.254 "adrfam": "IPv4", 00:12:57.254 "traddr": "10.0.0.3", 00:12:57.254 "trsvcid": "4420" 00:12:57.254 }, 00:12:57.254 "peer_address": { 00:12:57.254 "trtype": "TCP", 00:12:57.254 "adrfam": "IPv4", 00:12:57.254 "traddr": "10.0.0.1", 00:12:57.254 "trsvcid": "54778" 00:12:57.254 }, 00:12:57.254 "auth": { 00:12:57.254 "state": "completed", 00:12:57.254 "digest": "sha256", 00:12:57.254 "dhgroup": "ffdhe2048" 00:12:57.254 } 00:12:57.254 } 00:12:57.254 ]' 00:12:57.254 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.254 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.254 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.513 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:57.513 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.513 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.513 17:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.513 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.772 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:12:57.772 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
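(Recap of the records above: the test repeats one verification cycle per key for the sha256/ffdhe2048 combination, and the same cycle follows below for ffdhe3072 and ffdhe4096. The following is a minimal, illustrative sketch of that cycle using only commands visible in this log; the NQNs, host UUID, 10.0.0.3:4420 address and /var/tmp/host.sock socket are the values of this particular run, rpc.py paths are abbreviated, $KEY/$CKEY stand in for the DHHC-1 secrets printed above, and the target-side calls are assumed to use the default RPC socket that rpc_cmd wraps here.)

  # Host-side bdev_nvme options: restrict DH-HMAC-CHAP negotiation to one digest/DH group.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Target side: allow the host NQN with a given key pair.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach a controller from the host with the same keys ...
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # ... then verify the controller exists and the qpair authenticated as expected.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'                                                       # expect: completed
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # Repeat the check through the kernel initiator, then clean up for the next key.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
      --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288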
00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.708 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.969 00:12:59.228 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.228 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.228 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.487 { 00:12:59.487 "cntlid": 15, 00:12:59.487 "qid": 0, 00:12:59.487 "state": "enabled", 00:12:59.487 "thread": "nvmf_tgt_poll_group_000", 00:12:59.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:12:59.487 "listen_address": { 00:12:59.487 "trtype": "TCP", 00:12:59.487 "adrfam": "IPv4", 00:12:59.487 "traddr": "10.0.0.3", 00:12:59.487 "trsvcid": "4420" 00:12:59.487 }, 00:12:59.487 "peer_address": { 00:12:59.487 "trtype": "TCP", 00:12:59.487 "adrfam": "IPv4", 00:12:59.487 "traddr": "10.0.0.1", 00:12:59.487 "trsvcid": "53862" 00:12:59.487 }, 00:12:59.487 "auth": { 00:12:59.487 "state": "completed", 00:12:59.487 "digest": "sha256", 00:12:59.487 "dhgroup": "ffdhe2048" 00:12:59.487 } 00:12:59.487 } 00:12:59.487 ]' 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.487 
17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.487 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.747 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:12:59.747 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:00.315 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.575 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:00.575 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.575 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.575 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.575 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.575 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.575 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:00.575 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.834 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.094 00:13:01.094 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.094 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.094 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.663 { 00:13:01.663 "cntlid": 17, 00:13:01.663 "qid": 0, 00:13:01.663 "state": "enabled", 00:13:01.663 "thread": "nvmf_tgt_poll_group_000", 00:13:01.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:01.663 "listen_address": { 00:13:01.663 "trtype": "TCP", 00:13:01.663 "adrfam": "IPv4", 00:13:01.663 "traddr": "10.0.0.3", 00:13:01.663 "trsvcid": "4420" 00:13:01.663 }, 00:13:01.663 "peer_address": { 00:13:01.663 "trtype": "TCP", 00:13:01.663 "adrfam": "IPv4", 00:13:01.663 "traddr": "10.0.0.1", 00:13:01.663 "trsvcid": "53880" 00:13:01.663 }, 00:13:01.663 "auth": { 00:13:01.663 "state": "completed", 00:13:01.663 "digest": "sha256", 00:13:01.663 "dhgroup": "ffdhe3072" 00:13:01.663 } 00:13:01.663 } 00:13:01.663 ]' 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.663 17:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.663 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.922 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:01.922 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:02.489 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.489 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:02.489 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.489 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.489 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.489 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.489 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:02.489 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.748 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.007 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.007 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.007 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.007 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.266 00:13:03.266 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.266 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.266 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.525 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.525 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.525 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.525 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.525 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.525 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.525 { 00:13:03.525 "cntlid": 19, 00:13:03.525 "qid": 0, 00:13:03.525 "state": "enabled", 00:13:03.525 "thread": "nvmf_tgt_poll_group_000", 00:13:03.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:03.526 "listen_address": { 00:13:03.526 "trtype": "TCP", 00:13:03.526 "adrfam": "IPv4", 00:13:03.526 "traddr": "10.0.0.3", 00:13:03.526 "trsvcid": "4420" 00:13:03.526 }, 00:13:03.526 "peer_address": { 00:13:03.526 "trtype": "TCP", 00:13:03.526 "adrfam": "IPv4", 00:13:03.526 "traddr": "10.0.0.1", 00:13:03.526 "trsvcid": "53918" 00:13:03.526 }, 00:13:03.526 "auth": { 00:13:03.526 "state": "completed", 00:13:03.526 "digest": "sha256", 00:13:03.526 "dhgroup": "ffdhe3072" 00:13:03.526 } 00:13:03.526 } 00:13:03.526 ]' 00:13:03.526 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.526 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.526 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.526 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:03.526 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.784 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.784 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.784 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.043 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:04.043 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:04.611 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.611 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:04.611 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.611 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.611 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.611 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.611 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:04.611 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.871 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.439 00:13:05.439 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.439 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.439 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.698 { 00:13:05.698 "cntlid": 21, 00:13:05.698 "qid": 0, 00:13:05.698 "state": "enabled", 00:13:05.698 "thread": "nvmf_tgt_poll_group_000", 00:13:05.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:05.698 "listen_address": { 00:13:05.698 "trtype": "TCP", 00:13:05.698 "adrfam": "IPv4", 00:13:05.698 "traddr": "10.0.0.3", 00:13:05.698 "trsvcid": "4420" 00:13:05.698 }, 00:13:05.698 "peer_address": { 00:13:05.698 "trtype": "TCP", 00:13:05.698 "adrfam": "IPv4", 00:13:05.698 "traddr": "10.0.0.1", 00:13:05.698 "trsvcid": "53944" 00:13:05.698 }, 00:13:05.698 "auth": { 00:13:05.698 "state": "completed", 00:13:05.698 "digest": "sha256", 00:13:05.698 "dhgroup": "ffdhe3072" 00:13:05.698 } 00:13:05.698 } 00:13:05.698 ]' 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.698 17:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.698 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.956 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:05.956 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:06.893 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:06.894 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.461 00:13:07.461 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.461 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.461 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.721 { 00:13:07.721 "cntlid": 23, 00:13:07.721 "qid": 0, 00:13:07.721 "state": "enabled", 00:13:07.721 "thread": "nvmf_tgt_poll_group_000", 00:13:07.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:07.721 "listen_address": { 00:13:07.721 "trtype": "TCP", 00:13:07.721 "adrfam": "IPv4", 00:13:07.721 "traddr": "10.0.0.3", 00:13:07.721 "trsvcid": "4420" 00:13:07.721 }, 00:13:07.721 "peer_address": { 00:13:07.721 "trtype": "TCP", 00:13:07.721 "adrfam": "IPv4", 00:13:07.721 "traddr": "10.0.0.1", 00:13:07.721 "trsvcid": "53952" 00:13:07.721 }, 00:13:07.721 "auth": { 00:13:07.721 "state": "completed", 00:13:07.721 "digest": "sha256", 00:13:07.721 "dhgroup": "ffdhe3072" 00:13:07.721 } 00:13:07.721 } 00:13:07.721 ]' 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.721 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.982 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:07.982 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:08.920 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.920 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:08.920 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.920 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.920 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.920 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:08.920 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.920 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.921 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.179 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.179 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.179 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.180 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.438 00:13:09.438 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.438 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.438 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.696 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.696 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.696 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.696 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.696 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.696 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.696 { 00:13:09.696 "cntlid": 25, 00:13:09.696 "qid": 0, 00:13:09.696 "state": "enabled", 00:13:09.696 "thread": "nvmf_tgt_poll_group_000", 00:13:09.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:09.696 "listen_address": { 00:13:09.696 "trtype": "TCP", 00:13:09.696 "adrfam": "IPv4", 00:13:09.696 "traddr": "10.0.0.3", 00:13:09.696 "trsvcid": "4420" 00:13:09.696 }, 00:13:09.696 "peer_address": { 00:13:09.696 "trtype": "TCP", 00:13:09.696 "adrfam": "IPv4", 00:13:09.696 "traddr": "10.0.0.1", 00:13:09.696 "trsvcid": "38122" 00:13:09.697 }, 00:13:09.697 "auth": { 00:13:09.697 "state": "completed", 00:13:09.697 "digest": "sha256", 00:13:09.697 "dhgroup": "ffdhe4096" 00:13:09.697 } 00:13:09.697 } 00:13:09.697 ]' 00:13:09.697 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:09.697 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.697 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.697 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:09.697 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.955 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.955 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.955 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.214 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:10.214 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:10.782 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.782 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:10.782 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.782 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.782 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.782 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.782 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:10.782 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.349 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.608 00:13:11.608 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.608 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.608 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.867 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.867 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.867 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.868 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.868 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.868 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.868 { 00:13:11.868 "cntlid": 27, 00:13:11.868 "qid": 0, 00:13:11.868 "state": "enabled", 00:13:11.868 "thread": "nvmf_tgt_poll_group_000", 00:13:11.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:11.868 "listen_address": { 00:13:11.868 "trtype": "TCP", 00:13:11.868 "adrfam": "IPv4", 00:13:11.868 "traddr": "10.0.0.3", 00:13:11.868 "trsvcid": "4420" 00:13:11.868 }, 00:13:11.868 "peer_address": { 00:13:11.868 "trtype": "TCP", 00:13:11.868 "adrfam": "IPv4", 00:13:11.868 "traddr": "10.0.0.1", 00:13:11.868 "trsvcid": "38148" 00:13:11.868 }, 00:13:11.868 "auth": { 00:13:11.868 "state": "completed", 
00:13:11.868 "digest": "sha256", 00:13:11.868 "dhgroup": "ffdhe4096" 00:13:11.868 } 00:13:11.868 } 00:13:11.868 ]' 00:13:11.868 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.868 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.868 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.154 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:12.154 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.154 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.154 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.154 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.413 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:12.413 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:12.980 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.981 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:12.981 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.981 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.981 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.981 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.981 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:12.981 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.240 17:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.240 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.807 00:13:13.807 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.807 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.807 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.066 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.066 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.066 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.066 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.067 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.067 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.067 { 00:13:14.067 "cntlid": 29, 00:13:14.067 "qid": 0, 00:13:14.067 "state": "enabled", 00:13:14.067 "thread": "nvmf_tgt_poll_group_000", 00:13:14.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:14.067 "listen_address": { 00:13:14.067 "trtype": "TCP", 00:13:14.067 "adrfam": "IPv4", 00:13:14.067 "traddr": "10.0.0.3", 00:13:14.067 "trsvcid": "4420" 00:13:14.067 }, 00:13:14.067 "peer_address": { 00:13:14.067 "trtype": "TCP", 00:13:14.067 "adrfam": 
"IPv4", 00:13:14.067 "traddr": "10.0.0.1", 00:13:14.067 "trsvcid": "38174" 00:13:14.067 }, 00:13:14.067 "auth": { 00:13:14.067 "state": "completed", 00:13:14.067 "digest": "sha256", 00:13:14.067 "dhgroup": "ffdhe4096" 00:13:14.067 } 00:13:14.067 } 00:13:14.067 ]' 00:13:14.067 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.067 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:14.067 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.067 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:14.067 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.326 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.326 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.326 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.585 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:14.585 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:15.153 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.153 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:15.153 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.153 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.153 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.153 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.153 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:15.153 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:15.412 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:15.412 17:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.412 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:15.412 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:15.412 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:15.412 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.412 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:13:15.412 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.412 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.670 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.670 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:15.670 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.670 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.929 00:13:15.929 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.929 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.929 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.189 { 00:13:16.189 "cntlid": 31, 00:13:16.189 "qid": 0, 00:13:16.189 "state": "enabled", 00:13:16.189 "thread": "nvmf_tgt_poll_group_000", 00:13:16.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:16.189 "listen_address": { 00:13:16.189 "trtype": "TCP", 00:13:16.189 "adrfam": "IPv4", 00:13:16.189 "traddr": "10.0.0.3", 00:13:16.189 "trsvcid": "4420" 00:13:16.189 }, 00:13:16.189 "peer_address": { 00:13:16.189 "trtype": "TCP", 
00:13:16.189 "adrfam": "IPv4", 00:13:16.189 "traddr": "10.0.0.1", 00:13:16.189 "trsvcid": "38196" 00:13:16.189 }, 00:13:16.189 "auth": { 00:13:16.189 "state": "completed", 00:13:16.189 "digest": "sha256", 00:13:16.189 "dhgroup": "ffdhe4096" 00:13:16.189 } 00:13:16.189 } 00:13:16.189 ]' 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:16.189 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.447 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.447 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.447 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.706 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:16.706 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:17.274 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:17.533 
17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.533 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.100 00:13:18.100 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.100 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.100 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.359 { 00:13:18.359 "cntlid": 33, 00:13:18.359 "qid": 0, 00:13:18.359 "state": "enabled", 00:13:18.359 "thread": "nvmf_tgt_poll_group_000", 00:13:18.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:18.359 "listen_address": { 00:13:18.359 "trtype": "TCP", 00:13:18.359 "adrfam": "IPv4", 00:13:18.359 "traddr": 
"10.0.0.3", 00:13:18.359 "trsvcid": "4420" 00:13:18.359 }, 00:13:18.359 "peer_address": { 00:13:18.359 "trtype": "TCP", 00:13:18.359 "adrfam": "IPv4", 00:13:18.359 "traddr": "10.0.0.1", 00:13:18.359 "trsvcid": "38216" 00:13:18.359 }, 00:13:18.359 "auth": { 00:13:18.359 "state": "completed", 00:13:18.359 "digest": "sha256", 00:13:18.359 "dhgroup": "ffdhe6144" 00:13:18.359 } 00:13:18.359 } 00:13:18.359 ]' 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:18.359 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.618 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.618 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.618 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.877 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:18.877 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:19.444 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.444 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:19.444 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.445 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.445 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.445 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.445 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:19.445 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.704 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.273 00:13:20.273 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.273 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.273 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.531 { 00:13:20.531 "cntlid": 35, 00:13:20.531 "qid": 0, 00:13:20.531 "state": "enabled", 00:13:20.531 "thread": "nvmf_tgt_poll_group_000", 
00:13:20.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:20.531 "listen_address": { 00:13:20.531 "trtype": "TCP", 00:13:20.531 "adrfam": "IPv4", 00:13:20.531 "traddr": "10.0.0.3", 00:13:20.531 "trsvcid": "4420" 00:13:20.531 }, 00:13:20.531 "peer_address": { 00:13:20.531 "trtype": "TCP", 00:13:20.531 "adrfam": "IPv4", 00:13:20.531 "traddr": "10.0.0.1", 00:13:20.531 "trsvcid": "42420" 00:13:20.531 }, 00:13:20.531 "auth": { 00:13:20.531 "state": "completed", 00:13:20.531 "digest": "sha256", 00:13:20.531 "dhgroup": "ffdhe6144" 00:13:20.531 } 00:13:20.531 } 00:13:20.531 ]' 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.531 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.790 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:20.790 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.790 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.790 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.790 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.049 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:21.049 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:21.617 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.617 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:21.617 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.617 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.617 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.617 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.617 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:21.617 17:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.876 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.445 00:13:22.445 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.445 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.445 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.012 { 
00:13:23.012 "cntlid": 37, 00:13:23.012 "qid": 0, 00:13:23.012 "state": "enabled", 00:13:23.012 "thread": "nvmf_tgt_poll_group_000", 00:13:23.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:23.012 "listen_address": { 00:13:23.012 "trtype": "TCP", 00:13:23.012 "adrfam": "IPv4", 00:13:23.012 "traddr": "10.0.0.3", 00:13:23.012 "trsvcid": "4420" 00:13:23.012 }, 00:13:23.012 "peer_address": { 00:13:23.012 "trtype": "TCP", 00:13:23.012 "adrfam": "IPv4", 00:13:23.012 "traddr": "10.0.0.1", 00:13:23.012 "trsvcid": "42436" 00:13:23.012 }, 00:13:23.012 "auth": { 00:13:23.012 "state": "completed", 00:13:23.012 "digest": "sha256", 00:13:23.012 "dhgroup": "ffdhe6144" 00:13:23.012 } 00:13:23.012 } 00:13:23.012 ]' 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.012 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.271 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:23.271 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.207 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.776 00:13:24.776 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.776 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.776 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:25.035 { 00:13:25.035 "cntlid": 39, 00:13:25.035 "qid": 0, 00:13:25.035 "state": "enabled", 00:13:25.035 "thread": "nvmf_tgt_poll_group_000", 00:13:25.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:25.035 "listen_address": { 00:13:25.035 "trtype": "TCP", 00:13:25.035 "adrfam": "IPv4", 00:13:25.035 "traddr": "10.0.0.3", 00:13:25.035 "trsvcid": "4420" 00:13:25.035 }, 00:13:25.035 "peer_address": { 00:13:25.035 "trtype": "TCP", 00:13:25.035 "adrfam": "IPv4", 00:13:25.035 "traddr": "10.0.0.1", 00:13:25.035 "trsvcid": "42462" 00:13:25.035 }, 00:13:25.035 "auth": { 00:13:25.035 "state": "completed", 00:13:25.035 "digest": "sha256", 00:13:25.035 "dhgroup": "ffdhe6144" 00:13:25.035 } 00:13:25.035 } 00:13:25.035 ]' 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:25.035 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.294 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:25.294 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.294 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.294 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.294 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.552 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:25.552 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:26.119 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.686 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.687 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.254 00:13:27.254 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.254 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.254 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.513 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.514 { 00:13:27.514 "cntlid": 41, 00:13:27.514 "qid": 0, 00:13:27.514 "state": "enabled", 00:13:27.514 "thread": "nvmf_tgt_poll_group_000", 00:13:27.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:27.514 "listen_address": { 00:13:27.514 "trtype": "TCP", 00:13:27.514 "adrfam": "IPv4", 00:13:27.514 "traddr": "10.0.0.3", 00:13:27.514 "trsvcid": "4420" 00:13:27.514 }, 00:13:27.514 "peer_address": { 00:13:27.514 "trtype": "TCP", 00:13:27.514 "adrfam": "IPv4", 00:13:27.514 "traddr": "10.0.0.1", 00:13:27.514 "trsvcid": "42486" 00:13:27.514 }, 00:13:27.514 "auth": { 00:13:27.514 "state": "completed", 00:13:27.514 "digest": "sha256", 00:13:27.514 "dhgroup": "ffdhe8192" 00:13:27.514 } 00:13:27.514 } 00:13:27.514 ]' 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:27.514 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.773 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.773 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.773 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.032 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:28.032 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:28.600 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.600 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:28.600 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.600 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
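The pass/fail signal for each iteration above comes from three jq probes against the target's nvmf_subsystem_get_qpairs output, run right after the controller attaches. A minimal stand-alone sketch of that check follows; the subsystem NQN and jq filters are copied from the log, while the helper name is hypothetical and pointing rpc.py at the target's default RPC socket is an assumption (only the host-side socket /var/tmp/host.sock is visible in this part of the log).

# Re-run the target/auth.sh@75-77 checks by hand. NQN and jq filters are taken
# from the log; using rpc.py's default (target-side) socket is an assumption,
# since the host-side calls above pass -s /var/tmp/host.sock instead.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
verify_auth() {    # usage: verify_auth sha256 ffdhe8192
    local digest=$1 dhgroup=$2 qpairs
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] &&
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] &&
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
}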
00:13:28.600 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.600 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:28.600 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.859 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.795 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.795 17:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.795 { 00:13:29.795 "cntlid": 43, 00:13:29.795 "qid": 0, 00:13:29.795 "state": "enabled", 00:13:29.795 "thread": "nvmf_tgt_poll_group_000", 00:13:29.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:29.795 "listen_address": { 00:13:29.795 "trtype": "TCP", 00:13:29.795 "adrfam": "IPv4", 00:13:29.795 "traddr": "10.0.0.3", 00:13:29.795 "trsvcid": "4420" 00:13:29.795 }, 00:13:29.795 "peer_address": { 00:13:29.795 "trtype": "TCP", 00:13:29.795 "adrfam": "IPv4", 00:13:29.795 "traddr": "10.0.0.1", 00:13:29.795 "trsvcid": "44084" 00:13:29.795 }, 00:13:29.795 "auth": { 00:13:29.795 "state": "completed", 00:13:29.795 "digest": "sha256", 00:13:29.795 "dhgroup": "ffdhe8192" 00:13:29.795 } 00:13:29.795 } 00:13:29.795 ]' 00:13:29.795 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.054 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.054 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.054 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:30.054 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.054 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.054 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.054 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.313 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:30.313 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:31.249 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.249 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:31.249 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.249 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:31.249 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.249 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.249 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:31.249 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.507 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.508 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.508 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.508 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.075 00:13:32.075 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.075 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.075 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.334 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.334 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.334 17:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.334 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.334 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.334 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.334 { 00:13:32.334 "cntlid": 45, 00:13:32.334 "qid": 0, 00:13:32.334 "state": "enabled", 00:13:32.334 "thread": "nvmf_tgt_poll_group_000", 00:13:32.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:32.334 "listen_address": { 00:13:32.334 "trtype": "TCP", 00:13:32.334 "adrfam": "IPv4", 00:13:32.334 "traddr": "10.0.0.3", 00:13:32.334 "trsvcid": "4420" 00:13:32.334 }, 00:13:32.334 "peer_address": { 00:13:32.334 "trtype": "TCP", 00:13:32.334 "adrfam": "IPv4", 00:13:32.334 "traddr": "10.0.0.1", 00:13:32.334 "trsvcid": "44104" 00:13:32.334 }, 00:13:32.334 "auth": { 00:13:32.334 "state": "completed", 00:13:32.334 "digest": "sha256", 00:13:32.334 "dhgroup": "ffdhe8192" 00:13:32.334 } 00:13:32.334 } 00:13:32.334 ]' 00:13:32.334 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.334 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.334 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.334 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:32.334 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.334 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.334 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.334 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.901 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:32.901 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:33.472 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.472 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:33.472 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:33.472 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.472 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.472 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.472 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:33.472 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.736 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.672 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.672 
17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.672 { 00:13:34.672 "cntlid": 47, 00:13:34.672 "qid": 0, 00:13:34.672 "state": "enabled", 00:13:34.672 "thread": "nvmf_tgt_poll_group_000", 00:13:34.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:34.672 "listen_address": { 00:13:34.672 "trtype": "TCP", 00:13:34.672 "adrfam": "IPv4", 00:13:34.672 "traddr": "10.0.0.3", 00:13:34.672 "trsvcid": "4420" 00:13:34.672 }, 00:13:34.672 "peer_address": { 00:13:34.672 "trtype": "TCP", 00:13:34.672 "adrfam": "IPv4", 00:13:34.672 "traddr": "10.0.0.1", 00:13:34.672 "trsvcid": "44134" 00:13:34.672 }, 00:13:34.672 "auth": { 00:13:34.672 "state": "completed", 00:13:34.672 "digest": "sha256", 00:13:34.672 "dhgroup": "ffdhe8192" 00:13:34.672 } 00:13:34.672 } 00:13:34.672 ]' 00:13:34.672 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.931 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.931 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.931 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.931 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.931 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.931 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.931 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.190 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:35.191 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:35.758 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.018 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.586 00:13:36.586 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.586 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.586 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.844 { 00:13:36.844 "cntlid": 49, 00:13:36.844 "qid": 0, 00:13:36.844 "state": "enabled", 00:13:36.844 "thread": "nvmf_tgt_poll_group_000", 00:13:36.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:36.844 "listen_address": { 00:13:36.844 "trtype": "TCP", 00:13:36.844 "adrfam": "IPv4", 00:13:36.844 "traddr": "10.0.0.3", 00:13:36.844 "trsvcid": "4420" 00:13:36.844 }, 00:13:36.844 "peer_address": { 00:13:36.844 "trtype": "TCP", 00:13:36.844 "adrfam": "IPv4", 00:13:36.844 "traddr": "10.0.0.1", 00:13:36.844 "trsvcid": "44160" 00:13:36.844 }, 00:13:36.844 "auth": { 00:13:36.844 "state": "completed", 00:13:36.844 "digest": "sha384", 00:13:36.844 "dhgroup": "null" 00:13:36.844 } 00:13:36.844 } 00:13:36.844 ]' 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.844 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.103 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:37.103 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.103 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.103 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.103 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.362 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:37.362 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:37.929 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.929 17:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:37.929 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.929 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.187 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.187 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.187 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:38.187 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.446 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.704 00:13:38.704 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.704 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.704 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.963 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.963 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.963 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.963 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.963 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.963 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.963 { 00:13:38.963 "cntlid": 51, 00:13:38.963 "qid": 0, 00:13:38.963 "state": "enabled", 00:13:38.963 "thread": "nvmf_tgt_poll_group_000", 00:13:38.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:38.963 "listen_address": { 00:13:38.963 "trtype": "TCP", 00:13:38.963 "adrfam": "IPv4", 00:13:38.963 "traddr": "10.0.0.3", 00:13:38.963 "trsvcid": "4420" 00:13:38.963 }, 00:13:38.963 "peer_address": { 00:13:38.963 "trtype": "TCP", 00:13:38.963 "adrfam": "IPv4", 00:13:38.963 "traddr": "10.0.0.1", 00:13:38.963 "trsvcid": "44200" 00:13:38.963 }, 00:13:38.963 "auth": { 00:13:38.963 "state": "completed", 00:13:38.963 "digest": "sha384", 00:13:38.963 "dhgroup": "null" 00:13:38.963 } 00:13:38.963 } 00:13:38.963 ]' 00:13:38.963 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.221 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.221 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.221 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:39.221 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.221 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.221 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.221 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.480 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:39.480 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:40.415 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.415 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.415 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:40.415 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.415 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.415 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.415 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.415 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:40.415 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:40.674 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:40.674 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.674 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:40.674 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:40.674 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:40.675 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.675 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.675 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.675 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.675 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.675 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.675 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.675 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.933 00:13:40.933 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.933 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:13:40.933 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.192 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.192 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.192 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.192 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.192 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.192 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.192 { 00:13:41.192 "cntlid": 53, 00:13:41.192 "qid": 0, 00:13:41.192 "state": "enabled", 00:13:41.192 "thread": "nvmf_tgt_poll_group_000", 00:13:41.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:41.192 "listen_address": { 00:13:41.192 "trtype": "TCP", 00:13:41.192 "adrfam": "IPv4", 00:13:41.192 "traddr": "10.0.0.3", 00:13:41.192 "trsvcid": "4420" 00:13:41.192 }, 00:13:41.192 "peer_address": { 00:13:41.193 "trtype": "TCP", 00:13:41.193 "adrfam": "IPv4", 00:13:41.193 "traddr": "10.0.0.1", 00:13:41.193 "trsvcid": "51160" 00:13:41.193 }, 00:13:41.193 "auth": { 00:13:41.193 "state": "completed", 00:13:41.193 "digest": "sha384", 00:13:41.193 "dhgroup": "null" 00:13:41.193 } 00:13:41.193 } 00:13:41.193 ]' 00:13:41.193 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.193 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.451 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.451 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:41.451 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.451 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.451 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.451 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.710 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:41.710 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:42.277 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.277 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:42.277 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.277 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.277 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.277 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.277 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:42.277 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:42.536 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:42.536 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.536 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:42.536 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:42.536 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.536 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.536 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:13:42.536 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.537 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.537 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.537 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.537 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.537 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.104 00:13:43.104 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.104 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.104 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.363 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.363 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.363 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.363 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.363 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.363 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.363 { 00:13:43.363 "cntlid": 55, 00:13:43.363 "qid": 0, 00:13:43.363 "state": "enabled", 00:13:43.363 "thread": "nvmf_tgt_poll_group_000", 00:13:43.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:43.363 "listen_address": { 00:13:43.363 "trtype": "TCP", 00:13:43.363 "adrfam": "IPv4", 00:13:43.363 "traddr": "10.0.0.3", 00:13:43.363 "trsvcid": "4420" 00:13:43.363 }, 00:13:43.363 "peer_address": { 00:13:43.363 "trtype": "TCP", 00:13:43.363 "adrfam": "IPv4", 00:13:43.363 "traddr": "10.0.0.1", 00:13:43.363 "trsvcid": "51178" 00:13:43.363 }, 00:13:43.363 "auth": { 00:13:43.363 "state": "completed", 00:13:43.363 "digest": "sha384", 00:13:43.363 "dhgroup": "null" 00:13:43.363 } 00:13:43.363 } 00:13:43.363 ]' 00:13:43.363 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.363 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.363 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.363 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:43.363 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.363 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.363 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.363 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.621 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:43.621 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:44.583 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.841 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.099 00:13:45.099 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:13:45.099 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.099 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.357 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.357 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.357 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.357 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.357 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.357 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.357 { 00:13:45.358 "cntlid": 57, 00:13:45.358 "qid": 0, 00:13:45.358 "state": "enabled", 00:13:45.358 "thread": "nvmf_tgt_poll_group_000", 00:13:45.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:45.358 "listen_address": { 00:13:45.358 "trtype": "TCP", 00:13:45.358 "adrfam": "IPv4", 00:13:45.358 "traddr": "10.0.0.3", 00:13:45.358 "trsvcid": "4420" 00:13:45.358 }, 00:13:45.358 "peer_address": { 00:13:45.358 "trtype": "TCP", 00:13:45.358 "adrfam": "IPv4", 00:13:45.358 "traddr": "10.0.0.1", 00:13:45.358 "trsvcid": "51194" 00:13:45.358 }, 00:13:45.358 "auth": { 00:13:45.358 "state": "completed", 00:13:45.358 "digest": "sha384", 00:13:45.358 "dhgroup": "ffdhe2048" 00:13:45.358 } 00:13:45.358 } 00:13:45.358 ]' 00:13:45.616 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.616 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:45.616 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.616 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:45.616 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.616 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.616 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.616 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.874 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:45.874 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: 
--dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:46.810 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.810 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:46.810 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.810 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.810 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.810 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.810 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:46.810 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.068 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.326 00:13:47.326 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.326 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.326 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.585 { 00:13:47.585 "cntlid": 59, 00:13:47.585 "qid": 0, 00:13:47.585 "state": "enabled", 00:13:47.585 "thread": "nvmf_tgt_poll_group_000", 00:13:47.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:47.585 "listen_address": { 00:13:47.585 "trtype": "TCP", 00:13:47.585 "adrfam": "IPv4", 00:13:47.585 "traddr": "10.0.0.3", 00:13:47.585 "trsvcid": "4420" 00:13:47.585 }, 00:13:47.585 "peer_address": { 00:13:47.585 "trtype": "TCP", 00:13:47.585 "adrfam": "IPv4", 00:13:47.585 "traddr": "10.0.0.1", 00:13:47.585 "trsvcid": "51214" 00:13:47.585 }, 00:13:47.585 "auth": { 00:13:47.585 "state": "completed", 00:13:47.585 "digest": "sha384", 00:13:47.585 "dhgroup": "ffdhe2048" 00:13:47.585 } 00:13:47.585 } 00:13:47.585 ]' 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.585 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.843 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:47.843 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.843 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.843 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.843 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.102 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:48.102 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.035 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.600 00:13:49.600 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.600 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.600 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.859 { 00:13:49.859 "cntlid": 61, 00:13:49.859 "qid": 0, 00:13:49.859 "state": "enabled", 00:13:49.859 "thread": "nvmf_tgt_poll_group_000", 00:13:49.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:49.859 "listen_address": { 00:13:49.859 "trtype": "TCP", 00:13:49.859 "adrfam": "IPv4", 00:13:49.859 "traddr": "10.0.0.3", 00:13:49.859 "trsvcid": "4420" 00:13:49.859 }, 00:13:49.859 "peer_address": { 00:13:49.859 "trtype": "TCP", 00:13:49.859 "adrfam": "IPv4", 00:13:49.859 "traddr": "10.0.0.1", 00:13:49.859 "trsvcid": "42708" 00:13:49.859 }, 00:13:49.859 "auth": { 00:13:49.859 "state": "completed", 00:13:49.859 "digest": "sha384", 00:13:49.859 "dhgroup": "ffdhe2048" 00:13:49.859 } 00:13:49.859 } 00:13:49.859 ]' 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.859 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.117 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:50.117 17:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.058 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.640 00:13:51.640 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.640 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.640 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.899 { 00:13:51.899 "cntlid": 63, 00:13:51.899 "qid": 0, 00:13:51.899 "state": "enabled", 00:13:51.899 "thread": "nvmf_tgt_poll_group_000", 00:13:51.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:51.899 "listen_address": { 00:13:51.899 "trtype": "TCP", 00:13:51.899 "adrfam": "IPv4", 00:13:51.899 "traddr": "10.0.0.3", 00:13:51.899 "trsvcid": "4420" 00:13:51.899 }, 00:13:51.899 "peer_address": { 00:13:51.899 "trtype": "TCP", 00:13:51.899 "adrfam": "IPv4", 00:13:51.899 "traddr": "10.0.0.1", 00:13:51.899 "trsvcid": "42740" 00:13:51.899 }, 00:13:51.899 "auth": { 00:13:51.899 "state": "completed", 00:13:51.899 "digest": "sha384", 00:13:51.899 "dhgroup": "ffdhe2048" 00:13:51.899 } 00:13:51.899 } 00:13:51.899 ]' 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.899 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.157 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:52.157 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:53.091 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:53.350 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.609 00:13:53.609 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.609 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.609 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.868 { 00:13:53.868 "cntlid": 65, 00:13:53.868 "qid": 0, 00:13:53.868 "state": "enabled", 00:13:53.868 "thread": "nvmf_tgt_poll_group_000", 00:13:53.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:53.868 "listen_address": { 00:13:53.868 "trtype": "TCP", 00:13:53.868 "adrfam": "IPv4", 00:13:53.868 "traddr": "10.0.0.3", 00:13:53.868 "trsvcid": "4420" 00:13:53.868 }, 00:13:53.868 "peer_address": { 00:13:53.868 "trtype": "TCP", 00:13:53.868 "adrfam": "IPv4", 00:13:53.868 "traddr": "10.0.0.1", 00:13:53.868 "trsvcid": "42764" 00:13:53.868 }, 00:13:53.868 "auth": { 00:13:53.868 "state": "completed", 00:13:53.868 "digest": "sha384", 00:13:53.868 "dhgroup": "ffdhe3072" 00:13:53.868 } 00:13:53.868 } 00:13:53.868 ]' 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:53.868 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.126 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:54.126 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.126 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.127 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.127 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.385 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:54.385 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:13:55.321 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.321 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:55.321 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.321 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.321 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.321 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.321 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:55.321 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:55.321 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:55.321 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.321 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:55.321 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:55.321 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:55.321 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.322 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.322 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.322 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.322 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.322 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.322 17:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.322 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.889 00:13:55.889 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.889 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.889 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.147 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.148 { 00:13:56.148 "cntlid": 67, 00:13:56.148 "qid": 0, 00:13:56.148 "state": "enabled", 00:13:56.148 "thread": "nvmf_tgt_poll_group_000", 00:13:56.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:56.148 "listen_address": { 00:13:56.148 "trtype": "TCP", 00:13:56.148 "adrfam": "IPv4", 00:13:56.148 "traddr": "10.0.0.3", 00:13:56.148 "trsvcid": "4420" 00:13:56.148 }, 00:13:56.148 "peer_address": { 00:13:56.148 "trtype": "TCP", 00:13:56.148 "adrfam": "IPv4", 00:13:56.148 "traddr": "10.0.0.1", 00:13:56.148 "trsvcid": "42790" 00:13:56.148 }, 00:13:56.148 "auth": { 00:13:56.148 "state": "completed", 00:13:56.148 "digest": "sha384", 00:13:56.148 "dhgroup": "ffdhe3072" 00:13:56.148 } 00:13:56.148 } 00:13:56.148 ]' 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.148 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.407 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:56.407 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:13:57.356 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.356 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:57.356 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.356 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.356 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.356 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.356 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:57.356 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.356 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.922 00:13:57.922 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.922 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.922 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.180 { 00:13:58.180 "cntlid": 69, 00:13:58.180 "qid": 0, 00:13:58.180 "state": "enabled", 00:13:58.180 "thread": "nvmf_tgt_poll_group_000", 00:13:58.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:13:58.180 "listen_address": { 00:13:58.180 "trtype": "TCP", 00:13:58.180 "adrfam": "IPv4", 00:13:58.180 "traddr": "10.0.0.3", 00:13:58.180 "trsvcid": "4420" 00:13:58.180 }, 00:13:58.180 "peer_address": { 00:13:58.180 "trtype": "TCP", 00:13:58.180 "adrfam": "IPv4", 00:13:58.180 "traddr": "10.0.0.1", 00:13:58.180 "trsvcid": "42814" 00:13:58.180 }, 00:13:58.180 "auth": { 00:13:58.180 "state": "completed", 00:13:58.180 "digest": "sha384", 00:13:58.180 "dhgroup": "ffdhe3072" 00:13:58.180 } 00:13:58.180 } 00:13:58.180 ]' 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:58.180 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.438 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:58.438 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:13:59.004 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.004 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:13:59.004 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.004 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.262 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.262 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.262 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:59.262 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.520 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.778 00:13:59.778 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.778 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.778 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.036 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.036 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.036 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.036 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.036 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.036 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.036 { 00:14:00.036 "cntlid": 71, 00:14:00.036 "qid": 0, 00:14:00.036 "state": "enabled", 00:14:00.036 "thread": "nvmf_tgt_poll_group_000", 00:14:00.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:00.037 "listen_address": { 00:14:00.037 "trtype": "TCP", 00:14:00.037 "adrfam": "IPv4", 00:14:00.037 "traddr": "10.0.0.3", 00:14:00.037 "trsvcid": "4420" 00:14:00.037 }, 00:14:00.037 "peer_address": { 00:14:00.037 "trtype": "TCP", 00:14:00.037 "adrfam": "IPv4", 00:14:00.037 "traddr": "10.0.0.1", 00:14:00.037 "trsvcid": "37696" 00:14:00.037 }, 00:14:00.037 "auth": { 00:14:00.037 "state": "completed", 00:14:00.037 "digest": "sha384", 00:14:00.037 "dhgroup": "ffdhe3072" 00:14:00.037 } 00:14:00.037 } 00:14:00.037 ]' 00:14:00.037 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.037 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.037 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.037 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:00.037 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.295 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.295 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.295 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.553 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:00.553 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:01.119 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.396 17:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.396 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.666 00:14:01.666 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.666 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.666 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.924 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.924 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.924 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.924 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.924 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.924 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.924 { 00:14:01.924 "cntlid": 73, 00:14:01.924 "qid": 0, 00:14:01.924 "state": "enabled", 00:14:01.924 "thread": "nvmf_tgt_poll_group_000", 00:14:01.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:01.924 "listen_address": { 00:14:01.924 "trtype": "TCP", 00:14:01.924 "adrfam": "IPv4", 00:14:01.924 "traddr": "10.0.0.3", 00:14:01.924 "trsvcid": "4420" 00:14:01.924 }, 00:14:01.924 "peer_address": { 00:14:01.924 "trtype": "TCP", 00:14:01.924 "adrfam": "IPv4", 00:14:01.924 "traddr": "10.0.0.1", 00:14:01.924 "trsvcid": "37724" 00:14:01.924 }, 00:14:01.924 "auth": { 00:14:01.924 "state": "completed", 00:14:01.924 "digest": "sha384", 00:14:01.924 "dhgroup": "ffdhe4096" 00:14:01.924 } 00:14:01.924 } 00:14:01.924 ]' 00:14:01.924 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.182 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.182 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.182 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:02.182 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.182 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.182 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.182 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.441 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:02.441 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:03.376 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.376 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:03.376 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.376 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.376 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.376 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.376 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.376 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.376 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:03.376 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.376 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:03.376 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:03.376 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:03.377 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.377 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.377 17:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.377 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.377 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.377 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.377 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.377 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.944 00:14:03.944 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.944 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.944 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.203 { 00:14:04.203 "cntlid": 75, 00:14:04.203 "qid": 0, 00:14:04.203 "state": "enabled", 00:14:04.203 "thread": "nvmf_tgt_poll_group_000", 00:14:04.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:04.203 "listen_address": { 00:14:04.203 "trtype": "TCP", 00:14:04.203 "adrfam": "IPv4", 00:14:04.203 "traddr": "10.0.0.3", 00:14:04.203 "trsvcid": "4420" 00:14:04.203 }, 00:14:04.203 "peer_address": { 00:14:04.203 "trtype": "TCP", 00:14:04.203 "adrfam": "IPv4", 00:14:04.203 "traddr": "10.0.0.1", 00:14:04.203 "trsvcid": "37744" 00:14:04.203 }, 00:14:04.203 "auth": { 00:14:04.203 "state": "completed", 00:14:04.203 "digest": "sha384", 00:14:04.203 "dhgroup": "ffdhe4096" 00:14:04.203 } 00:14:04.203 } 00:14:04.203 ]' 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:14:04.203 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.203 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.203 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.203 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.770 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:04.770 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:05.339 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.339 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:05.339 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.339 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.339 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.339 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.339 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:05.339 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.597 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.598 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.598 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.598 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.164 00:14:06.164 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.164 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.164 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.423 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.423 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.423 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.423 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.423 { 00:14:06.423 "cntlid": 77, 00:14:06.423 "qid": 0, 00:14:06.423 "state": "enabled", 00:14:06.423 "thread": "nvmf_tgt_poll_group_000", 00:14:06.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:06.423 "listen_address": { 00:14:06.423 "trtype": "TCP", 00:14:06.423 "adrfam": "IPv4", 00:14:06.423 "traddr": "10.0.0.3", 00:14:06.423 "trsvcid": "4420" 00:14:06.423 }, 00:14:06.423 "peer_address": { 00:14:06.423 "trtype": "TCP", 00:14:06.423 "adrfam": "IPv4", 00:14:06.423 "traddr": "10.0.0.1", 00:14:06.423 "trsvcid": "37770" 00:14:06.423 }, 00:14:06.423 "auth": { 00:14:06.423 "state": "completed", 00:14:06.423 "digest": "sha384", 00:14:06.423 "dhgroup": "ffdhe4096" 00:14:06.423 } 00:14:06.423 } 00:14:06.423 ]' 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.423 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.705 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:06.705 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.642 17:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:07.642 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.210 00:14:08.210 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.210 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.210 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.469 { 00:14:08.469 "cntlid": 79, 00:14:08.469 "qid": 0, 00:14:08.469 "state": "enabled", 00:14:08.469 "thread": "nvmf_tgt_poll_group_000", 00:14:08.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:08.469 "listen_address": { 00:14:08.469 "trtype": "TCP", 00:14:08.469 "adrfam": "IPv4", 00:14:08.469 "traddr": "10.0.0.3", 00:14:08.469 "trsvcid": "4420" 00:14:08.469 }, 00:14:08.469 "peer_address": { 00:14:08.469 "trtype": "TCP", 00:14:08.469 "adrfam": "IPv4", 00:14:08.469 "traddr": "10.0.0.1", 00:14:08.469 "trsvcid": "37794" 00:14:08.469 }, 00:14:08.469 "auth": { 00:14:08.469 "state": "completed", 00:14:08.469 "digest": "sha384", 00:14:08.469 "dhgroup": "ffdhe4096" 00:14:08.469 } 00:14:08.469 } 00:14:08.469 ]' 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.469 17:15:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:08.469 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.728 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.728 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.728 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.988 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:08.988 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:09.556 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.815 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.383 00:14:10.383 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.383 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.383 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.641 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.641 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.641 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.641 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.641 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.641 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.641 { 00:14:10.641 "cntlid": 81, 00:14:10.641 "qid": 0, 00:14:10.641 "state": "enabled", 00:14:10.641 "thread": "nvmf_tgt_poll_group_000", 00:14:10.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:10.641 "listen_address": { 00:14:10.641 "trtype": "TCP", 00:14:10.641 "adrfam": "IPv4", 00:14:10.641 "traddr": "10.0.0.3", 00:14:10.641 "trsvcid": "4420" 00:14:10.641 }, 00:14:10.641 "peer_address": { 00:14:10.641 "trtype": "TCP", 00:14:10.641 "adrfam": "IPv4", 00:14:10.641 "traddr": "10.0.0.1", 00:14:10.642 "trsvcid": "49510" 00:14:10.642 }, 00:14:10.642 "auth": { 00:14:10.642 "state": "completed", 00:14:10.642 "digest": "sha384", 00:14:10.642 "dhgroup": "ffdhe6144" 00:14:10.642 } 00:14:10.642 } 00:14:10.642 ]' 00:14:10.642 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
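The entries above complete one connect/authenticate cycle for the sha384 digest with the ffdhe6144 DH group and key0. A minimal shell sketch of that cycle, using only the RPCs and flags visible in this trace (the rpc.py path, socket path, address, NQNs, and the key names key0/ckey0 are copied from the log and assume the keys were registered in the keyring earlier in the test):

#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP connect/verify cycle, mirroring the trace above.
# Paths, addresses and NQNs are taken from this log; key0/ckey0 are key names
# assumed to have been loaded earlier in the test run.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288

# Host side: restrict the initiator to the digest/dhgroup under test.
"$RPC_PY" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side (default RPC socket here): allow the host and bind its DH-HMAC-CHAP keys.
"$RPC_PY" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the same key pair.
"$RPC_PY" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify: the qpair's auth block should report the negotiated digest, dhgroup and state.
"$RPC_PY" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

The jq checks that follow in the trace assert exactly this: digest sha384, dhgroup ffdhe6144, and auth state "completed", before the controller is detached and the next key is tried.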
00:14:10.642 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.642 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.642 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:10.642 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.642 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.642 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.642 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.900 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:10.900 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:11.859 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.860 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.427 00:14:12.427 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.427 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.427 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.686 { 00:14:12.686 "cntlid": 83, 00:14:12.686 "qid": 0, 00:14:12.686 "state": "enabled", 00:14:12.686 "thread": "nvmf_tgt_poll_group_000", 00:14:12.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:12.686 "listen_address": { 00:14:12.686 "trtype": "TCP", 00:14:12.686 "adrfam": "IPv4", 00:14:12.686 "traddr": "10.0.0.3", 00:14:12.686 "trsvcid": "4420" 00:14:12.686 }, 00:14:12.686 "peer_address": { 00:14:12.686 "trtype": "TCP", 00:14:12.686 "adrfam": "IPv4", 00:14:12.686 "traddr": "10.0.0.1", 00:14:12.686 "trsvcid": "49542" 00:14:12.686 }, 00:14:12.686 "auth": { 00:14:12.686 "state": "completed", 00:14:12.686 "digest": "sha384", 
00:14:12.686 "dhgroup": "ffdhe6144" 00:14:12.686 } 00:14:12.686 } 00:14:12.686 ]' 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:12.686 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.944 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.944 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.944 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.203 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:13.203 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:13.770 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.770 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:13.770 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.770 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.770 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.770 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.770 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:13.770 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.029 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.597 00:14:14.597 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.597 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.597 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.856 { 00:14:14.856 "cntlid": 85, 00:14:14.856 "qid": 0, 00:14:14.856 "state": "enabled", 00:14:14.856 "thread": "nvmf_tgt_poll_group_000", 00:14:14.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:14.856 "listen_address": { 00:14:14.856 "trtype": "TCP", 00:14:14.856 "adrfam": "IPv4", 00:14:14.856 "traddr": "10.0.0.3", 00:14:14.856 "trsvcid": "4420" 00:14:14.856 }, 00:14:14.856 "peer_address": { 00:14:14.856 "trtype": "TCP", 00:14:14.856 "adrfam": "IPv4", 00:14:14.856 "traddr": "10.0.0.1", 00:14:14.856 "trsvcid": "49570" 
00:14:14.856 }, 00:14:14.856 "auth": { 00:14:14.856 "state": "completed", 00:14:14.856 "digest": "sha384", 00:14:14.856 "dhgroup": "ffdhe6144" 00:14:14.856 } 00:14:14.856 } 00:14:14.856 ]' 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.856 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.424 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:15.424 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:15.991 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.991 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:15.991 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.991 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.991 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.991 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.991 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:15.991 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.251 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.817 00:14:16.817 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.817 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.817 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.076 { 00:14:17.076 "cntlid": 87, 00:14:17.076 "qid": 0, 00:14:17.076 "state": "enabled", 00:14:17.076 "thread": "nvmf_tgt_poll_group_000", 00:14:17.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:17.076 "listen_address": { 00:14:17.076 "trtype": "TCP", 00:14:17.076 "adrfam": "IPv4", 00:14:17.076 "traddr": "10.0.0.3", 00:14:17.076 "trsvcid": "4420" 00:14:17.076 }, 00:14:17.076 "peer_address": { 00:14:17.076 "trtype": "TCP", 00:14:17.076 "adrfam": "IPv4", 00:14:17.076 "traddr": "10.0.0.1", 00:14:17.076 "trsvcid": 
"49606" 00:14:17.076 }, 00:14:17.076 "auth": { 00:14:17.076 "state": "completed", 00:14:17.076 "digest": "sha384", 00:14:17.076 "dhgroup": "ffdhe6144" 00:14:17.076 } 00:14:17.076 } 00:14:17.076 ]' 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:17.076 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.335 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.335 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.335 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.594 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:17.594 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:18.161 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.419 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:18.419 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.419 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.419 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.419 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.419 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.419 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:18.419 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.678 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.247 00:14:19.247 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.247 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.247 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.506 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.506 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.506 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.506 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.765 { 00:14:19.765 "cntlid": 89, 00:14:19.765 "qid": 0, 00:14:19.765 "state": "enabled", 00:14:19.765 "thread": "nvmf_tgt_poll_group_000", 00:14:19.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:19.765 "listen_address": { 00:14:19.765 "trtype": "TCP", 00:14:19.765 "adrfam": "IPv4", 00:14:19.765 "traddr": "10.0.0.3", 00:14:19.765 "trsvcid": "4420" 00:14:19.765 }, 00:14:19.765 "peer_address": { 00:14:19.765 
"trtype": "TCP", 00:14:19.765 "adrfam": "IPv4", 00:14:19.765 "traddr": "10.0.0.1", 00:14:19.765 "trsvcid": "50350" 00:14:19.765 }, 00:14:19.765 "auth": { 00:14:19.765 "state": "completed", 00:14:19.765 "digest": "sha384", 00:14:19.765 "dhgroup": "ffdhe8192" 00:14:19.765 } 00:14:19.765 } 00:14:19.765 ]' 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.765 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.023 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:20.023 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:20.959 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.959 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:20.959 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.959 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.959 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.959 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.959 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:20.959 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:21.218 17:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.218 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.813 00:14:21.813 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.813 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.813 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.072 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.072 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.072 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.072 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.072 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.072 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.072 { 00:14:22.072 "cntlid": 91, 00:14:22.072 "qid": 0, 00:14:22.072 "state": "enabled", 00:14:22.072 "thread": "nvmf_tgt_poll_group_000", 00:14:22.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 
00:14:22.072 "listen_address": { 00:14:22.072 "trtype": "TCP", 00:14:22.072 "adrfam": "IPv4", 00:14:22.072 "traddr": "10.0.0.3", 00:14:22.072 "trsvcid": "4420" 00:14:22.072 }, 00:14:22.072 "peer_address": { 00:14:22.072 "trtype": "TCP", 00:14:22.072 "adrfam": "IPv4", 00:14:22.072 "traddr": "10.0.0.1", 00:14:22.072 "trsvcid": "50380" 00:14:22.072 }, 00:14:22.072 "auth": { 00:14:22.072 "state": "completed", 00:14:22.072 "digest": "sha384", 00:14:22.072 "dhgroup": "ffdhe8192" 00:14:22.072 } 00:14:22.072 } 00:14:22.072 ]' 00:14:22.072 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.330 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.330 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.330 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:22.330 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.330 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.330 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.330 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.589 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:22.589 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:23.527 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.527 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:23.527 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.527 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.527 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.527 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.527 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.527 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.786 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.353 00:14:24.353 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.354 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.354 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.612 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.612 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.612 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.612 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.612 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.612 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.612 { 00:14:24.612 "cntlid": 93, 00:14:24.612 "qid": 0, 00:14:24.612 "state": "enabled", 00:14:24.612 "thread": 
"nvmf_tgt_poll_group_000", 00:14:24.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:24.612 "listen_address": { 00:14:24.612 "trtype": "TCP", 00:14:24.612 "adrfam": "IPv4", 00:14:24.612 "traddr": "10.0.0.3", 00:14:24.612 "trsvcid": "4420" 00:14:24.612 }, 00:14:24.612 "peer_address": { 00:14:24.612 "trtype": "TCP", 00:14:24.612 "adrfam": "IPv4", 00:14:24.612 "traddr": "10.0.0.1", 00:14:24.612 "trsvcid": "50402" 00:14:24.612 }, 00:14:24.612 "auth": { 00:14:24.612 "state": "completed", 00:14:24.612 "digest": "sha384", 00:14:24.612 "dhgroup": "ffdhe8192" 00:14:24.612 } 00:14:24.612 } 00:14:24.612 ]' 00:14:24.612 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.872 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.872 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.872 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.872 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.872 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.872 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.872 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.131 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:25.131 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:26.067 17:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.067 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.326 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.326 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:26.326 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.326 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.893 00:14:26.893 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.893 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.893 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.177 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.177 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.177 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.177 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.177 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.177 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.177 { 00:14:27.177 "cntlid": 95, 00:14:27.177 "qid": 0, 00:14:27.177 "state": "enabled", 00:14:27.177 
"thread": "nvmf_tgt_poll_group_000", 00:14:27.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:27.177 "listen_address": { 00:14:27.177 "trtype": "TCP", 00:14:27.177 "adrfam": "IPv4", 00:14:27.177 "traddr": "10.0.0.3", 00:14:27.177 "trsvcid": "4420" 00:14:27.177 }, 00:14:27.177 "peer_address": { 00:14:27.177 "trtype": "TCP", 00:14:27.177 "adrfam": "IPv4", 00:14:27.177 "traddr": "10.0.0.1", 00:14:27.177 "trsvcid": "50424" 00:14:27.177 }, 00:14:27.177 "auth": { 00:14:27.177 "state": "completed", 00:14:27.177 "digest": "sha384", 00:14:27.177 "dhgroup": "ffdhe8192" 00:14:27.177 } 00:14:27.177 } 00:14:27.177 ]' 00:14:27.177 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.437 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.437 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.437 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:27.437 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.437 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.437 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.437 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.696 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:27.696 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.634 17:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.634 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.202 00:14:29.202 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.202 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.202 17:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.460 { 00:14:29.460 "cntlid": 97, 00:14:29.460 "qid": 0, 00:14:29.460 "state": "enabled", 00:14:29.460 "thread": "nvmf_tgt_poll_group_000", 00:14:29.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:29.460 "listen_address": { 00:14:29.460 "trtype": "TCP", 00:14:29.460 "adrfam": "IPv4", 00:14:29.460 "traddr": "10.0.0.3", 00:14:29.460 "trsvcid": "4420" 00:14:29.460 }, 00:14:29.460 "peer_address": { 00:14:29.460 "trtype": "TCP", 00:14:29.460 "adrfam": "IPv4", 00:14:29.460 "traddr": "10.0.0.1", 00:14:29.460 "trsvcid": "54996" 00:14:29.460 }, 00:14:29.460 "auth": { 00:14:29.460 "state": "completed", 00:14:29.460 "digest": "sha512", 00:14:29.460 "dhgroup": "null" 00:14:29.460 } 00:14:29.460 } 00:14:29.460 ]' 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.460 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.027 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:30.027 17:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:30.594 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.594 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:30.594 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.594 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.594 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:30.594 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.594 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:30.594 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.160 17:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.419 00:14:31.419 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.419 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.419 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.677 17:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.677 { 00:14:31.677 "cntlid": 99, 00:14:31.677 "qid": 0, 00:14:31.677 "state": "enabled", 00:14:31.677 "thread": "nvmf_tgt_poll_group_000", 00:14:31.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:31.677 "listen_address": { 00:14:31.677 "trtype": "TCP", 00:14:31.677 "adrfam": "IPv4", 00:14:31.677 "traddr": "10.0.0.3", 00:14:31.677 "trsvcid": "4420" 00:14:31.677 }, 00:14:31.677 "peer_address": { 00:14:31.677 "trtype": "TCP", 00:14:31.677 "adrfam": "IPv4", 00:14:31.677 "traddr": "10.0.0.1", 00:14:31.677 "trsvcid": "55018" 00:14:31.677 }, 00:14:31.677 "auth": { 00:14:31.677 "state": "completed", 00:14:31.677 "digest": "sha512", 00:14:31.677 "dhgroup": "null" 00:14:31.677 } 00:14:31.677 } 00:14:31.677 ]' 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:31.677 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.935 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.935 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.935 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.193 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:32.193 17:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:32.759 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.759 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:32.759 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.759 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.759 17:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.759 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.759 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:32.759 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.326 17:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.620 00:14:33.620 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.620 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.620 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.878 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.878 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.878 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.878 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.878 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.878 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.878 { 00:14:33.878 "cntlid": 101, 00:14:33.878 "qid": 0, 00:14:33.878 "state": "enabled", 00:14:33.878 "thread": "nvmf_tgt_poll_group_000", 00:14:33.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:33.878 "listen_address": { 00:14:33.879 "trtype": "TCP", 00:14:33.879 "adrfam": "IPv4", 00:14:33.879 "traddr": "10.0.0.3", 00:14:33.879 "trsvcid": "4420" 00:14:33.879 }, 00:14:33.879 "peer_address": { 00:14:33.879 "trtype": "TCP", 00:14:33.879 "adrfam": "IPv4", 00:14:33.879 "traddr": "10.0.0.1", 00:14:33.879 "trsvcid": "55044" 00:14:33.879 }, 00:14:33.879 "auth": { 00:14:33.879 "state": "completed", 00:14:33.879 "digest": "sha512", 00:14:33.879 "dhgroup": "null" 00:14:33.879 } 00:14:33.879 } 00:14:33.879 ]' 00:14:33.879 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.879 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.879 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.879 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:33.879 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.137 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.137 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.137 17:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.396 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:34.396 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:34.965 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.965 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:34.965 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.965 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:34.965 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.965 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.965 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:34.965 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.224 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.791 00:14:35.791 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.791 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.791 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.049 { 00:14:36.049 "cntlid": 103, 00:14:36.049 "qid": 0, 00:14:36.049 "state": "enabled", 00:14:36.049 "thread": "nvmf_tgt_poll_group_000", 00:14:36.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:36.049 "listen_address": { 00:14:36.049 "trtype": "TCP", 00:14:36.049 "adrfam": "IPv4", 00:14:36.049 "traddr": "10.0.0.3", 00:14:36.049 "trsvcid": "4420" 00:14:36.049 }, 00:14:36.049 "peer_address": { 00:14:36.049 "trtype": "TCP", 00:14:36.049 "adrfam": "IPv4", 00:14:36.049 "traddr": "10.0.0.1", 00:14:36.049 "trsvcid": "55070" 00:14:36.049 }, 00:14:36.049 "auth": { 00:14:36.049 "state": "completed", 00:14:36.049 "digest": "sha512", 00:14:36.049 "dhgroup": "null" 00:14:36.049 } 00:14:36.049 } 00:14:36.049 ]' 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.049 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.307 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:36.307 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.243 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.243 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.818 00:14:37.818 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.818 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.818 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.105 
17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.105 { 00:14:38.105 "cntlid": 105, 00:14:38.105 "qid": 0, 00:14:38.105 "state": "enabled", 00:14:38.105 "thread": "nvmf_tgt_poll_group_000", 00:14:38.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:38.105 "listen_address": { 00:14:38.105 "trtype": "TCP", 00:14:38.105 "adrfam": "IPv4", 00:14:38.105 "traddr": "10.0.0.3", 00:14:38.105 "trsvcid": "4420" 00:14:38.105 }, 00:14:38.105 "peer_address": { 00:14:38.105 "trtype": "TCP", 00:14:38.105 "adrfam": "IPv4", 00:14:38.105 "traddr": "10.0.0.1", 00:14:38.105 "trsvcid": "55108" 00:14:38.105 }, 00:14:38.105 "auth": { 00:14:38.105 "state": "completed", 00:14:38.105 "digest": "sha512", 00:14:38.105 "dhgroup": "ffdhe2048" 00:14:38.105 } 00:14:38.105 } 00:14:38.105 ]' 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.105 17:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.671 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:38.671 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:39.237 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.237 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:39.237 17:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.237 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.237 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.237 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.237 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.495 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:39.495 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.495 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.495 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:39.495 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.495 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.495 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.496 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.496 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.496 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.496 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.496 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.496 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.754 00:14:39.754 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.754 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.754 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.013 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:40.013 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.013 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.013 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.272 { 00:14:40.272 "cntlid": 107, 00:14:40.272 "qid": 0, 00:14:40.272 "state": "enabled", 00:14:40.272 "thread": "nvmf_tgt_poll_group_000", 00:14:40.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:40.272 "listen_address": { 00:14:40.272 "trtype": "TCP", 00:14:40.272 "adrfam": "IPv4", 00:14:40.272 "traddr": "10.0.0.3", 00:14:40.272 "trsvcid": "4420" 00:14:40.272 }, 00:14:40.272 "peer_address": { 00:14:40.272 "trtype": "TCP", 00:14:40.272 "adrfam": "IPv4", 00:14:40.272 "traddr": "10.0.0.1", 00:14:40.272 "trsvcid": "51956" 00:14:40.272 }, 00:14:40.272 "auth": { 00:14:40.272 "state": "completed", 00:14:40.272 "digest": "sha512", 00:14:40.272 "dhgroup": "ffdhe2048" 00:14:40.272 } 00:14:40.272 } 00:14:40.272 ]' 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.272 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.531 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:40.531 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:41.466 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.466 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:41.466 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.466 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.466 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.466 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.466 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:41.466 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.724 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.983 00:14:41.983 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.983 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.983 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:42.247 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.247 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.247 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.247 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.247 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.247 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.247 { 00:14:42.247 "cntlid": 109, 00:14:42.247 "qid": 0, 00:14:42.247 "state": "enabled", 00:14:42.247 "thread": "nvmf_tgt_poll_group_000", 00:14:42.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:42.247 "listen_address": { 00:14:42.247 "trtype": "TCP", 00:14:42.247 "adrfam": "IPv4", 00:14:42.247 "traddr": "10.0.0.3", 00:14:42.247 "trsvcid": "4420" 00:14:42.247 }, 00:14:42.247 "peer_address": { 00:14:42.247 "trtype": "TCP", 00:14:42.247 "adrfam": "IPv4", 00:14:42.247 "traddr": "10.0.0.1", 00:14:42.247 "trsvcid": "51982" 00:14:42.247 }, 00:14:42.247 "auth": { 00:14:42.247 "state": "completed", 00:14:42.247 "digest": "sha512", 00:14:42.247 "dhgroup": "ffdhe2048" 00:14:42.247 } 00:14:42.247 } 00:14:42.247 ]' 00:14:42.247 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.247 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.247 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.528 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.528 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.528 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.528 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.528 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.787 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:42.787 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:43.353 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.353 17:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:43.353 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.353 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.353 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.353 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.353 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:43.353 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.612 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.870 00:14:43.870 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.870 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.870 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.438 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.438 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.438 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.438 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.438 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.438 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.438 { 00:14:44.438 "cntlid": 111, 00:14:44.438 "qid": 0, 00:14:44.438 "state": "enabled", 00:14:44.438 "thread": "nvmf_tgt_poll_group_000", 00:14:44.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:44.438 "listen_address": { 00:14:44.438 "trtype": "TCP", 00:14:44.438 "adrfam": "IPv4", 00:14:44.438 "traddr": "10.0.0.3", 00:14:44.438 "trsvcid": "4420" 00:14:44.438 }, 00:14:44.438 "peer_address": { 00:14:44.438 "trtype": "TCP", 00:14:44.438 "adrfam": "IPv4", 00:14:44.438 "traddr": "10.0.0.1", 00:14:44.438 "trsvcid": "52002" 00:14:44.438 }, 00:14:44.438 "auth": { 00:14:44.438 "state": "completed", 00:14:44.438 "digest": "sha512", 00:14:44.438 "dhgroup": "ffdhe2048" 00:14:44.438 } 00:14:44.438 } 00:14:44.438 ]' 00:14:44.438 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.438 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.438 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.438 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:44.438 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.438 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.438 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.438 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.696 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:44.696 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:45.263 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.263 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:45.263 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.263 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.263 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.263 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.263 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.263 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.263 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.521 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.089 00:14:46.089 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.089 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
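For reference, the trace above repeats one verification cycle per DH-CHAP key and FFDHE group. The following is a condensed sketch of that cycle, assembled only from the RPCs and flags visible in this log; `rpc_cmd` and the host-side `rpc.py -s /var/tmp/host.sock` wrapper are the test-script helpers seen in the trace, and the key index (key1/ckey1) is illustrative, not a statement of which key the next iteration uses.

# one connect_authenticate pass, as exercised by target/auth.sh in this run
rpc_host="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288"

# 1. pin the host-side initiator to a single digest/dhgroup combination
$rpc_host bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# 2. register the host on the target subsystem with the key under test
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. attach a controller via the host RPC and verify it authenticated
$rpc_host bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc_host bdev_nvme_get_controllers | jq -r '.[].name'             # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'    # digest / dhgroup / state "completed"

# 4. tear down, then repeat through the kernel initiator with DHHC-1 secrets
$rpc_host bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n "$subnqn"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"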
00:14:46.089 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.347 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.347 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.347 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.347 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.347 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.347 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.347 { 00:14:46.348 "cntlid": 113, 00:14:46.348 "qid": 0, 00:14:46.348 "state": "enabled", 00:14:46.348 "thread": "nvmf_tgt_poll_group_000", 00:14:46.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:46.348 "listen_address": { 00:14:46.348 "trtype": "TCP", 00:14:46.348 "adrfam": "IPv4", 00:14:46.348 "traddr": "10.0.0.3", 00:14:46.348 "trsvcid": "4420" 00:14:46.348 }, 00:14:46.348 "peer_address": { 00:14:46.348 "trtype": "TCP", 00:14:46.348 "adrfam": "IPv4", 00:14:46.348 "traddr": "10.0.0.1", 00:14:46.348 "trsvcid": "52042" 00:14:46.348 }, 00:14:46.348 "auth": { 00:14:46.348 "state": "completed", 00:14:46.348 "digest": "sha512", 00:14:46.348 "dhgroup": "ffdhe3072" 00:14:46.348 } 00:14:46.348 } 00:14:46.348 ]' 00:14:46.348 17:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.348 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.348 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.348 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:46.348 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.348 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.348 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.348 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.606 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:46.606 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret 
DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.546 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.113 00:14:48.113 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.113 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.113 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.371 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.371 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.371 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.371 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.371 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.371 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.371 { 00:14:48.371 "cntlid": 115, 00:14:48.371 "qid": 0, 00:14:48.371 "state": "enabled", 00:14:48.371 "thread": "nvmf_tgt_poll_group_000", 00:14:48.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:48.371 "listen_address": { 00:14:48.371 "trtype": "TCP", 00:14:48.371 "adrfam": "IPv4", 00:14:48.371 "traddr": "10.0.0.3", 00:14:48.371 "trsvcid": "4420" 00:14:48.371 }, 00:14:48.371 "peer_address": { 00:14:48.371 "trtype": "TCP", 00:14:48.371 "adrfam": "IPv4", 00:14:48.371 "traddr": "10.0.0.1", 00:14:48.371 "trsvcid": "52068" 00:14:48.371 }, 00:14:48.371 "auth": { 00:14:48.371 "state": "completed", 00:14:48.371 "digest": "sha512", 00:14:48.371 "dhgroup": "ffdhe3072" 00:14:48.371 } 00:14:48.371 } 00:14:48.371 ]' 00:14:48.371 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.371 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.371 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.371 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:48.371 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.371 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.371 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.371 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.630 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:48.630 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 
8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:49.196 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.196 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:49.196 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.196 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.196 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.196 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.196 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.196 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.455 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.021 00:14:50.021 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.021 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.021 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.279 { 00:14:50.279 "cntlid": 117, 00:14:50.279 "qid": 0, 00:14:50.279 "state": "enabled", 00:14:50.279 "thread": "nvmf_tgt_poll_group_000", 00:14:50.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:50.279 "listen_address": { 00:14:50.279 "trtype": "TCP", 00:14:50.279 "adrfam": "IPv4", 00:14:50.279 "traddr": "10.0.0.3", 00:14:50.279 "trsvcid": "4420" 00:14:50.279 }, 00:14:50.279 "peer_address": { 00:14:50.279 "trtype": "TCP", 00:14:50.279 "adrfam": "IPv4", 00:14:50.279 "traddr": "10.0.0.1", 00:14:50.279 "trsvcid": "55752" 00:14:50.279 }, 00:14:50.279 "auth": { 00:14:50.279 "state": "completed", 00:14:50.279 "digest": "sha512", 00:14:50.279 "dhgroup": "ffdhe3072" 00:14:50.279 } 00:14:50.279 } 00:14:50.279 ]' 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.279 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.279 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:50.279 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.279 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.279 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.279 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.846 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:50.846 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:51.412 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.412 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:51.412 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.412 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.412 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.412 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.412 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:51.412 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.671 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.264 00:14:52.264 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.264 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.264 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.521 { 00:14:52.521 "cntlid": 119, 00:14:52.521 "qid": 0, 00:14:52.521 "state": "enabled", 00:14:52.521 "thread": "nvmf_tgt_poll_group_000", 00:14:52.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:52.521 "listen_address": { 00:14:52.521 "trtype": "TCP", 00:14:52.521 "adrfam": "IPv4", 00:14:52.521 "traddr": "10.0.0.3", 00:14:52.521 "trsvcid": "4420" 00:14:52.521 }, 00:14:52.521 "peer_address": { 00:14:52.521 "trtype": "TCP", 00:14:52.521 "adrfam": "IPv4", 00:14:52.521 "traddr": "10.0.0.1", 00:14:52.521 "trsvcid": "55780" 00:14:52.521 }, 00:14:52.521 "auth": { 00:14:52.521 "state": "completed", 00:14:52.521 "digest": "sha512", 00:14:52.521 "dhgroup": "ffdhe3072" 00:14:52.521 } 00:14:52.521 } 00:14:52.521 ]' 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:52.521 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.779 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.779 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.779 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.038 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:53.038 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.605 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.863 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.122 00:14:54.122 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.122 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.122 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.689 { 00:14:54.689 "cntlid": 121, 00:14:54.689 "qid": 0, 00:14:54.689 "state": "enabled", 00:14:54.689 "thread": "nvmf_tgt_poll_group_000", 00:14:54.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:54.689 "listen_address": { 00:14:54.689 "trtype": "TCP", 00:14:54.689 "adrfam": "IPv4", 00:14:54.689 "traddr": "10.0.0.3", 00:14:54.689 "trsvcid": "4420" 00:14:54.689 }, 00:14:54.689 "peer_address": { 00:14:54.689 "trtype": "TCP", 00:14:54.689 "adrfam": "IPv4", 00:14:54.689 "traddr": "10.0.0.1", 00:14:54.689 "trsvcid": "55798" 00:14:54.689 }, 00:14:54.689 "auth": { 00:14:54.689 "state": "completed", 00:14:54.689 "digest": "sha512", 00:14:54.689 "dhgroup": "ffdhe4096" 00:14:54.689 } 00:14:54.689 } 00:14:54.689 ]' 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.689 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.947 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret 
DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:54.947 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:14:55.514 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.782 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:55.783 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.783 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.783 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.783 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.783 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:55.783 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.041 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.299 00:14:56.299 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.299 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.299 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.559 { 00:14:56.559 "cntlid": 123, 00:14:56.559 "qid": 0, 00:14:56.559 "state": "enabled", 00:14:56.559 "thread": "nvmf_tgt_poll_group_000", 00:14:56.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:56.559 "listen_address": { 00:14:56.559 "trtype": "TCP", 00:14:56.559 "adrfam": "IPv4", 00:14:56.559 "traddr": "10.0.0.3", 00:14:56.559 "trsvcid": "4420" 00:14:56.559 }, 00:14:56.559 "peer_address": { 00:14:56.559 "trtype": "TCP", 00:14:56.559 "adrfam": "IPv4", 00:14:56.559 "traddr": "10.0.0.1", 00:14:56.559 "trsvcid": "55822" 00:14:56.559 }, 00:14:56.559 "auth": { 00:14:56.559 "state": "completed", 00:14:56.559 "digest": "sha512", 00:14:56.559 "dhgroup": "ffdhe4096" 00:14:56.559 } 00:14:56.559 } 00:14:56.559 ]' 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:56.559 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.818 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.818 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.818 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.077 17:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:57.078 17:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:14:57.646 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.646 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:57.646 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.646 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.646 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.646 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.646 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:57.646 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.904 17:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.904 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.162 00:14:58.162 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.162 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.162 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.422 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.422 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.422 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.422 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.422 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.422 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.422 { 00:14:58.422 "cntlid": 125, 00:14:58.422 "qid": 0, 00:14:58.422 "state": "enabled", 00:14:58.422 "thread": "nvmf_tgt_poll_group_000", 00:14:58.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:14:58.422 "listen_address": { 00:14:58.422 "trtype": "TCP", 00:14:58.422 "adrfam": "IPv4", 00:14:58.422 "traddr": "10.0.0.3", 00:14:58.422 "trsvcid": "4420" 00:14:58.422 }, 00:14:58.422 "peer_address": { 00:14:58.422 "trtype": "TCP", 00:14:58.422 "adrfam": "IPv4", 00:14:58.422 "traddr": "10.0.0.1", 00:14:58.422 "trsvcid": "55840" 00:14:58.422 }, 00:14:58.422 "auth": { 00:14:58.422 "state": "completed", 00:14:58.422 "digest": "sha512", 00:14:58.422 "dhgroup": "ffdhe4096" 00:14:58.422 } 00:14:58.422 } 00:14:58.422 ]' 00:14:58.422 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.680 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.680 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.680 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.681 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.681 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.681 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.681 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.939 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:58.939 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:14:59.504 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.504 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:14:59.504 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.504 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.505 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.505 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.505 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:59.505 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.763 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.330 00:15:00.330 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.330 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.330 17:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.588 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.589 { 00:15:00.589 "cntlid": 127, 00:15:00.589 "qid": 0, 00:15:00.589 "state": "enabled", 00:15:00.589 "thread": "nvmf_tgt_poll_group_000", 00:15:00.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:00.589 "listen_address": { 00:15:00.589 "trtype": "TCP", 00:15:00.589 "adrfam": "IPv4", 00:15:00.589 "traddr": "10.0.0.3", 00:15:00.589 "trsvcid": "4420" 00:15:00.589 }, 00:15:00.589 "peer_address": { 00:15:00.589 "trtype": "TCP", 00:15:00.589 "adrfam": "IPv4", 00:15:00.589 "traddr": "10.0.0.1", 00:15:00.589 "trsvcid": "55992" 00:15:00.589 }, 00:15:00.589 "auth": { 00:15:00.589 "state": "completed", 00:15:00.589 "digest": "sha512", 00:15:00.589 "dhgroup": "ffdhe4096" 00:15:00.589 } 00:15:00.589 } 00:15:00.589 ]' 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.589 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.847 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:00.847 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.792 17:16:02 
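Each authentication round traced in this log follows the same RPC sequence; distilled from the commands visible above, a single round looks roughly like the sketch below. This is only an outline: the real loop lives in the target/auth.sh script named in the trace, rpc_cmd is assumed to wrap scripts/rpc.py against the target's default RPC socket (its expansion is not shown in the log), and key0/ckey0 are keyring entries the test registered earlier in the run.

    # host side: restrict the initiator to the digest/dhgroup under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # target side: register the host on the subsystem with this round's DH-HMAC-CHAP key(s)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, which forces an in-band authentication transaction
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0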
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.792 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.368 00:15:02.368 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.368 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.368 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.626 { 00:15:02.626 "cntlid": 129, 00:15:02.626 "qid": 0, 00:15:02.626 "state": "enabled", 00:15:02.626 "thread": "nvmf_tgt_poll_group_000", 00:15:02.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:02.626 "listen_address": { 00:15:02.626 "trtype": "TCP", 00:15:02.626 "adrfam": "IPv4", 00:15:02.626 "traddr": "10.0.0.3", 00:15:02.626 "trsvcid": "4420" 00:15:02.626 }, 00:15:02.626 "peer_address": { 00:15:02.626 "trtype": "TCP", 00:15:02.626 "adrfam": "IPv4", 00:15:02.626 "traddr": "10.0.0.1", 00:15:02.626 "trsvcid": "56028" 00:15:02.626 }, 00:15:02.626 "auth": { 00:15:02.626 "state": "completed", 00:15:02.626 "digest": "sha512", 00:15:02.626 "dhgroup": "ffdhe6144" 00:15:02.626 } 00:15:02.626 } 00:15:02.626 ]' 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.626 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.198 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:15:03.198 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:15:03.778 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.778 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:03.778 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.778 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.778 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.778 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.778 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:03.778 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.039 17:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.039 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.040 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.298 00:15:04.557 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.557 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.557 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.557 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.815 { 00:15:04.815 "cntlid": 131, 00:15:04.815 "qid": 0, 00:15:04.815 "state": "enabled", 00:15:04.815 "thread": "nvmf_tgt_poll_group_000", 00:15:04.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:04.815 "listen_address": { 00:15:04.815 "trtype": "TCP", 00:15:04.815 "adrfam": "IPv4", 00:15:04.815 "traddr": "10.0.0.3", 00:15:04.815 "trsvcid": "4420" 00:15:04.815 }, 00:15:04.815 "peer_address": { 00:15:04.815 "trtype": "TCP", 00:15:04.815 "adrfam": "IPv4", 00:15:04.815 "traddr": "10.0.0.1", 00:15:04.815 "trsvcid": "56052" 00:15:04.815 }, 00:15:04.815 "auth": { 00:15:04.815 "state": "completed", 00:15:04.815 "digest": "sha512", 00:15:04.815 "dhgroup": "ffdhe6144" 00:15:04.815 } 00:15:04.815 } 00:15:04.815 ]' 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.815 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.073 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:15:05.074 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:15:05.641 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.641 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:05.641 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.641 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.641 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.641 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.641 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:05.641 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.903 17:16:06 
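After every attach, the test confirms that authentication actually completed by inspecting the queue pair it just created; the assertions are the jq expressions repeated throughout this trace, roughly as sketched below (the expected digest and dhgroup values change with each round, and the detach uses the same host socket as the attach):

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0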
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.903 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.470 00:15:06.470 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.470 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.470 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.728 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.728 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.728 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.728 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.728 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.728 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.728 { 00:15:06.728 "cntlid": 133, 00:15:06.728 "qid": 0, 00:15:06.728 "state": "enabled", 00:15:06.728 "thread": "nvmf_tgt_poll_group_000", 00:15:06.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:06.728 "listen_address": { 00:15:06.728 "trtype": "TCP", 00:15:06.728 "adrfam": "IPv4", 00:15:06.728 "traddr": "10.0.0.3", 00:15:06.728 "trsvcid": "4420" 00:15:06.728 }, 00:15:06.728 "peer_address": { 00:15:06.728 "trtype": "TCP", 00:15:06.728 "adrfam": "IPv4", 00:15:06.728 "traddr": "10.0.0.1", 00:15:06.728 "trsvcid": "56084" 00:15:06.728 }, 00:15:06.728 "auth": { 00:15:06.728 "state": "completed", 00:15:06.728 "digest": "sha512", 00:15:06.728 "dhgroup": "ffdhe6144" 00:15:06.728 } 00:15:06.728 } 00:15:06.728 ]' 00:15:06.728 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.987 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.987 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.987 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:06.987 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.987 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.987 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.987 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.245 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:15:07.245 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:15:07.811 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.069 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:08.069 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.069 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.069 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.069 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.069 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:08.069 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.328 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.895 00:15:08.895 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.895 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.895 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.153 { 00:15:09.153 "cntlid": 135, 00:15:09.153 "qid": 0, 00:15:09.153 "state": "enabled", 00:15:09.153 "thread": "nvmf_tgt_poll_group_000", 00:15:09.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:09.153 "listen_address": { 00:15:09.153 "trtype": "TCP", 00:15:09.153 "adrfam": "IPv4", 00:15:09.153 "traddr": "10.0.0.3", 00:15:09.153 "trsvcid": "4420" 00:15:09.153 }, 00:15:09.153 "peer_address": { 00:15:09.153 "trtype": "TCP", 00:15:09.153 "adrfam": "IPv4", 00:15:09.153 "traddr": "10.0.0.1", 00:15:09.153 "trsvcid": "56098" 00:15:09.153 }, 00:15:09.153 "auth": { 00:15:09.153 "state": "completed", 00:15:09.153 "digest": "sha512", 00:15:09.153 "dhgroup": "ffdhe6144" 00:15:09.153 } 00:15:09.153 } 00:15:09.153 ]' 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.153 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.411 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:09.411 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:10.345 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.345 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:10.345 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.346 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.346 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.346 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.346 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.346 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.346 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.346 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.915 00:15:11.174 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.174 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.174 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.432 { 00:15:11.432 "cntlid": 137, 00:15:11.432 "qid": 0, 00:15:11.432 "state": "enabled", 00:15:11.432 "thread": "nvmf_tgt_poll_group_000", 00:15:11.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:11.432 "listen_address": { 00:15:11.432 "trtype": "TCP", 00:15:11.432 "adrfam": "IPv4", 00:15:11.432 "traddr": "10.0.0.3", 00:15:11.432 "trsvcid": "4420" 00:15:11.432 }, 00:15:11.432 "peer_address": { 00:15:11.432 "trtype": "TCP", 00:15:11.432 "adrfam": "IPv4", 00:15:11.432 "traddr": "10.0.0.1", 00:15:11.432 "trsvcid": "44140" 00:15:11.432 }, 00:15:11.432 "auth": { 00:15:11.432 "state": "completed", 00:15:11.432 "digest": "sha512", 00:15:11.432 "dhgroup": "ffdhe8192" 00:15:11.432 } 00:15:11.432 } 00:15:11.432 ]' 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.432 17:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.432 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.000 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:15:12.000 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:15:12.567 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.567 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:12.567 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.567 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.567 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.567 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.567 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.567 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:12.826 17:16:13 
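The kernel-initiator half of each round is the nvme-cli connect/disconnect pair seen above; the DHHC-1 secrets are passed directly on the command line. The full secret strings for each round appear verbatim in the log and are abbreviated here with "..." placeholders:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
        --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0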
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.826 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.393 00:15:13.393 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.393 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.393 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.652 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.652 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.652 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.652 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.911 { 00:15:13.911 "cntlid": 139, 00:15:13.911 "qid": 0, 00:15:13.911 "state": "enabled", 00:15:13.911 "thread": "nvmf_tgt_poll_group_000", 00:15:13.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:13.911 "listen_address": { 00:15:13.911 "trtype": "TCP", 00:15:13.911 "adrfam": "IPv4", 00:15:13.911 "traddr": "10.0.0.3", 00:15:13.911 "trsvcid": "4420" 00:15:13.911 }, 00:15:13.911 "peer_address": { 00:15:13.911 "trtype": "TCP", 00:15:13.911 "adrfam": "IPv4", 00:15:13.911 "traddr": "10.0.0.1", 00:15:13.911 "trsvcid": "44168" 00:15:13.911 }, 00:15:13.911 "auth": { 00:15:13.911 "state": "completed", 00:15:13.911 "digest": "sha512", 00:15:13.911 "dhgroup": "ffdhe8192" 00:15:13.911 } 00:15:13.911 } 00:15:13.911 ]' 00:15:13.911 17:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.911 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.170 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:15:14.170 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: --dhchap-ctrl-secret DHHC-1:02:NDI5OTAzNjFkMTJjYmU2NmM2YzlmZGZiZGZlZjZmZWU5MmM5MmVkNDhhNTVlMDFkhJDZiQ==: 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.115 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.051 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.051 { 00:15:16.051 "cntlid": 141, 00:15:16.051 "qid": 0, 00:15:16.051 "state": "enabled", 00:15:16.051 "thread": "nvmf_tgt_poll_group_000", 00:15:16.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:16.051 "listen_address": { 00:15:16.051 "trtype": "TCP", 00:15:16.051 "adrfam": "IPv4", 00:15:16.051 "traddr": "10.0.0.3", 00:15:16.051 "trsvcid": "4420" 00:15:16.051 }, 00:15:16.051 "peer_address": { 00:15:16.051 "trtype": "TCP", 00:15:16.051 "adrfam": "IPv4", 00:15:16.051 "traddr": "10.0.0.1", 00:15:16.051 "trsvcid": "44198" 00:15:16.051 }, 00:15:16.051 "auth": { 00:15:16.051 "state": "completed", 00:15:16.051 "digest": 
"sha512", 00:15:16.051 "dhgroup": "ffdhe8192" 00:15:16.051 } 00:15:16.051 } 00:15:16.051 ]' 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.051 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.309 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.309 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.309 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.309 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.309 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.568 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:15:16.568 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:01:ZjJhMjZhN2RjNjY1NmQzNmRiOGY0NTc0Njg5ZTU0YmTacq2v: 00:15:17.136 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.136 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:17.136 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.136 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.136 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.136 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.136 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:17.136 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.395 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.962 00:15:18.221 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.221 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.221 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.221 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.479 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.479 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.479 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.479 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.479 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.479 { 00:15:18.479 "cntlid": 143, 00:15:18.480 "qid": 0, 00:15:18.480 "state": "enabled", 00:15:18.480 "thread": "nvmf_tgt_poll_group_000", 00:15:18.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:18.480 "listen_address": { 00:15:18.480 "trtype": "TCP", 00:15:18.480 "adrfam": "IPv4", 00:15:18.480 "traddr": "10.0.0.3", 00:15:18.480 "trsvcid": "4420" 00:15:18.480 }, 00:15:18.480 "peer_address": { 00:15:18.480 "trtype": "TCP", 00:15:18.480 "adrfam": "IPv4", 00:15:18.480 "traddr": "10.0.0.1", 00:15:18.480 "trsvcid": "44224" 00:15:18.480 }, 00:15:18.480 "auth": { 00:15:18.480 "state": "completed", 00:15:18.480 
"digest": "sha512", 00:15:18.480 "dhgroup": "ffdhe8192" 00:15:18.480 } 00:15:18.480 } 00:15:18.480 ]' 00:15:18.480 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.480 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.480 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.480 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.480 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.480 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.480 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.480 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.739 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:18.739 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:19.674 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.675 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.243 00:15:20.243 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.243 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.243 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.502 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.502 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.502 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.502 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.502 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.502 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.502 { 00:15:20.502 "cntlid": 145, 00:15:20.502 "qid": 0, 00:15:20.502 "state": "enabled", 00:15:20.502 "thread": "nvmf_tgt_poll_group_000", 00:15:20.502 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:20.502 "listen_address": { 00:15:20.502 "trtype": "TCP", 00:15:20.502 "adrfam": "IPv4", 00:15:20.502 "traddr": "10.0.0.3", 00:15:20.502 "trsvcid": "4420" 00:15:20.502 }, 00:15:20.502 "peer_address": { 00:15:20.502 "trtype": "TCP", 00:15:20.502 "adrfam": "IPv4", 00:15:20.502 "traddr": "10.0.0.1", 00:15:20.502 "trsvcid": "58370" 00:15:20.502 }, 00:15:20.502 "auth": { 00:15:20.502 "state": "completed", 00:15:20.502 "digest": "sha512", 00:15:20.502 "dhgroup": "ffdhe8192" 00:15:20.502 } 00:15:20.502 } 00:15:20.502 ]' 00:15:20.502 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.760 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.760 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.760 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.760 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.760 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.760 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.760 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.019 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:15:21.019 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:00:MDFkMzg0NWI2ZDA4YzJlNjUxMDEyYjJlNzhhNmY5ZGY4NjQxNjZiOTBmZDhhOTBjz3NWNw==: --dhchap-ctrl-secret DHHC-1:03:MWFlYTEzY2MyNTViZjkxYmFjYjMwYTQ1MWMwMzk5MjQ1MWZiMjZmYzE5ODgzYTJiY2MyYTdmMTc0MzMzZTg0ZFiImMM=: 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 00:15:21.592 17:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:21.592 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:22.528 request: 00:15:22.528 { 00:15:22.528 "name": "nvme0", 00:15:22.528 "trtype": "tcp", 00:15:22.528 "traddr": "10.0.0.3", 00:15:22.528 "adrfam": "ipv4", 00:15:22.528 "trsvcid": "4420", 00:15:22.528 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:22.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:22.528 "prchk_reftag": false, 00:15:22.528 "prchk_guard": false, 00:15:22.528 "hdgst": false, 00:15:22.528 "ddgst": false, 00:15:22.528 "dhchap_key": "key2", 00:15:22.528 "allow_unrecognized_csi": false, 00:15:22.528 "method": "bdev_nvme_attach_controller", 00:15:22.528 "req_id": 1 00:15:22.528 } 00:15:22.528 Got JSON-RPC error response 00:15:22.528 response: 00:15:22.528 { 00:15:22.528 "code": -5, 00:15:22.528 "message": "Input/output error" 00:15:22.528 } 00:15:22.528 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:22.528 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:22.528 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:22.528 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:22.528 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:22.528 
17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.528 17:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:22.528 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:22.529 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:23.096 request: 00:15:23.096 { 00:15:23.096 "name": "nvme0", 00:15:23.096 "trtype": "tcp", 00:15:23.096 "traddr": "10.0.0.3", 00:15:23.096 "adrfam": "ipv4", 00:15:23.096 "trsvcid": "4420", 00:15:23.096 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:23.096 "prchk_reftag": false, 00:15:23.096 "prchk_guard": false, 00:15:23.096 "hdgst": false, 00:15:23.096 "ddgst": false, 00:15:23.096 "dhchap_key": "key1", 00:15:23.096 "dhchap_ctrlr_key": "ckey2", 00:15:23.096 "allow_unrecognized_csi": false, 00:15:23.096 "method": "bdev_nvme_attach_controller", 00:15:23.096 "req_id": 1 00:15:23.096 } 00:15:23.096 Got JSON-RPC error response 00:15:23.096 response: 00:15:23.096 { 
00:15:23.096 "code": -5, 00:15:23.096 "message": "Input/output error" 00:15:23.096 } 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.096 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.663 
request: 00:15:23.663 { 00:15:23.663 "name": "nvme0", 00:15:23.663 "trtype": "tcp", 00:15:23.663 "traddr": "10.0.0.3", 00:15:23.663 "adrfam": "ipv4", 00:15:23.663 "trsvcid": "4420", 00:15:23.663 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:23.663 "prchk_reftag": false, 00:15:23.663 "prchk_guard": false, 00:15:23.663 "hdgst": false, 00:15:23.663 "ddgst": false, 00:15:23.663 "dhchap_key": "key1", 00:15:23.663 "dhchap_ctrlr_key": "ckey1", 00:15:23.663 "allow_unrecognized_csi": false, 00:15:23.663 "method": "bdev_nvme_attach_controller", 00:15:23.663 "req_id": 1 00:15:23.663 } 00:15:23.663 Got JSON-RPC error response 00:15:23.663 response: 00:15:23.663 { 00:15:23.663 "code": -5, 00:15:23.663 "message": "Input/output error" 00:15:23.663 } 00:15:23.663 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:23.663 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:23.663 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:23.663 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:23.663 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:23.663 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67176 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67176 ']' 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67176 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67176 00:15:23.664 killing process with pid 67176 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67176' 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67176 00:15:23.664 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67176 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.923 17:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70269 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70269 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70269 ']' 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:23.923 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70269 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70269 ']' 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
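The restart above brings up a fresh target with auth-level logging before the keyring-based tests begin. A minimal sketch of that step, based on the command recorded in this run; the netns name, repo path (/home/vagrant/spdk_repo/spdk) and the use of framework_start_init after --wait-for-rpc are assumptions about this environment rather than output copied from the log.
# Start the NVMe-oF target paused before framework init, with nvmf_auth tracing enabled.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll the default RPC socket until the app answers (roughly what waitforlisten does).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods
# Because of --wait-for-rpc, the framework must be started explicitly before any
# transport or subsystem configuration RPCs are issued.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init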
00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:24.208 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.467 null0 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MUs 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.467 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Moq ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Moq 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xbx 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.iO5 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iO5 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:24.726 17:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xad 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.9bU ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9bU 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EMm 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
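The key3 flow exercised above reduces to three RPCs: register the secret in the target keyring, bind it to the host NQN, and attach from the host with the same key. A condensed sketch using the key name and file path recorded in this run; the host-side application (on /var/tmp/host.sock) is assumed to have key3 in its own keyring as well, which this script sets up elsewhere.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 1) Target side: load the DH-HMAC-CHAP secret file under the name key3.
"$rpc" keyring_file_add_key key3 /tmp/spdk.key-sha512.EMm
# 2) Target side: allow the host NQN and require that key for authentication.
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3
# 3) Host side: attach, presenting the same key; success creates the nvme0 controller.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3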
00:15:24.726 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.661 nvme0n1 00:15:25.661 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.661 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.661 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.920 { 00:15:25.920 "cntlid": 1, 00:15:25.920 "qid": 0, 00:15:25.920 "state": "enabled", 00:15:25.920 "thread": "nvmf_tgt_poll_group_000", 00:15:25.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:25.920 "listen_address": { 00:15:25.920 "trtype": "TCP", 00:15:25.920 "adrfam": "IPv4", 00:15:25.920 "traddr": "10.0.0.3", 00:15:25.920 "trsvcid": "4420" 00:15:25.920 }, 00:15:25.920 "peer_address": { 00:15:25.920 "trtype": "TCP", 00:15:25.920 "adrfam": "IPv4", 00:15:25.920 "traddr": "10.0.0.1", 00:15:25.920 "trsvcid": "58424" 00:15:25.920 }, 00:15:25.920 "auth": { 00:15:25.920 "state": "completed", 00:15:25.920 "digest": "sha512", 00:15:25.920 "dhgroup": "ffdhe8192" 00:15:25.920 } 00:15:25.920 } 00:15:25.920 ]' 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:25.920 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.178 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.178 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.178 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.437 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:26.437 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key3 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:27.004 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:27.263 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:27.263 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:27.263 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:27.263 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:27.263 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.263 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:27.521 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:27.521 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.521 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.521 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.780 request: 00:15:27.780 { 00:15:27.780 "name": "nvme0", 00:15:27.780 "trtype": "tcp", 00:15:27.780 "traddr": "10.0.0.3", 00:15:27.780 "adrfam": "ipv4", 00:15:27.780 "trsvcid": "4420", 00:15:27.780 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:27.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:27.780 "prchk_reftag": false, 00:15:27.780 "prchk_guard": false, 00:15:27.780 "hdgst": false, 00:15:27.780 "ddgst": false, 00:15:27.780 "dhchap_key": "key3", 00:15:27.780 "allow_unrecognized_csi": false, 00:15:27.780 "method": "bdev_nvme_attach_controller", 00:15:27.780 "req_id": 1 00:15:27.780 } 00:15:27.780 Got JSON-RPC error response 00:15:27.780 response: 00:15:27.780 { 00:15:27.780 "code": -5, 00:15:27.780 "message": "Input/output error" 00:15:27.780 } 00:15:27.780 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:27.780 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:27.780 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:27.780 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:27.780 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:27.780 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:27.780 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:27.780 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.039 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.298 request: 00:15:28.298 { 00:15:28.298 "name": "nvme0", 00:15:28.298 "trtype": "tcp", 00:15:28.298 "traddr": "10.0.0.3", 00:15:28.298 "adrfam": "ipv4", 00:15:28.298 "trsvcid": "4420", 00:15:28.298 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:28.298 "prchk_reftag": false, 00:15:28.298 "prchk_guard": false, 00:15:28.298 "hdgst": false, 00:15:28.298 "ddgst": false, 00:15:28.298 "dhchap_key": "key3", 00:15:28.298 "allow_unrecognized_csi": false, 00:15:28.298 "method": "bdev_nvme_attach_controller", 00:15:28.298 "req_id": 1 00:15:28.298 } 00:15:28.298 Got JSON-RPC error response 00:15:28.298 response: 00:15:28.298 { 00:15:28.298 "code": -5, 00:15:28.298 "message": "Input/output error" 00:15:28.298 } 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.298 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.557 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:28.557 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.557 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.557 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.557 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.558 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.124 request: 00:15:29.124 { 00:15:29.124 "name": "nvme0", 00:15:29.124 "trtype": "tcp", 00:15:29.124 "traddr": "10.0.0.3", 00:15:29.124 "adrfam": "ipv4", 00:15:29.124 "trsvcid": "4420", 00:15:29.124 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:29.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:29.124 "prchk_reftag": false, 00:15:29.124 "prchk_guard": false, 00:15:29.124 "hdgst": false, 00:15:29.124 "ddgst": false, 00:15:29.124 "dhchap_key": "key0", 00:15:29.124 "dhchap_ctrlr_key": "key1", 00:15:29.124 "allow_unrecognized_csi": false, 00:15:29.124 "method": "bdev_nvme_attach_controller", 00:15:29.124 "req_id": 1 00:15:29.124 } 00:15:29.124 Got JSON-RPC error response 00:15:29.124 response: 00:15:29.124 { 00:15:29.124 "code": -5, 00:15:29.124 "message": "Input/output error" 00:15:29.124 } 00:15:29.124 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:29.124 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.124 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.124 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:15:29.124 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:29.124 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:29.124 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:29.382 nvme0n1 00:15:29.382 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:29.382 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:29.382 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.641 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.641 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.641 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.899 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 00:15:29.899 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.899 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.899 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.899 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:29.899 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:29.899 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:30.835 nvme0n1 00:15:30.835 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:30.835 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:30.835 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.402 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.402 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:31.402 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.402 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.402 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.402 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:31.402 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.402 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:31.661 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.661 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:31.661 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid 8c073979-9b92-4972-b56b-796474446288 -l 0 --dhchap-secret DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: --dhchap-ctrl-secret DHHC-1:03:YTg5YmZhOWI5YWNkZGZlOGFhNDU3MGJkYTI1M2E5ZmIwYzU5NDM0ZWE5N2U4MDk1YjA4YTk3NWY0MzRhM2YxNgYQGqw=: 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.228 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.505 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:32.505 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:32.505 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:32.505 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:32.505 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.506 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:32.506 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.506 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:32.506 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:32.506 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:33.081 request: 00:15:33.081 { 00:15:33.081 "name": "nvme0", 00:15:33.081 "trtype": "tcp", 00:15:33.081 "traddr": "10.0.0.3", 00:15:33.081 "adrfam": "ipv4", 00:15:33.081 "trsvcid": "4420", 00:15:33.081 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:33.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288", 00:15:33.081 "prchk_reftag": false, 00:15:33.081 "prchk_guard": false, 00:15:33.081 "hdgst": false, 00:15:33.081 "ddgst": false, 00:15:33.081 "dhchap_key": "key1", 00:15:33.081 "allow_unrecognized_csi": false, 00:15:33.081 "method": "bdev_nvme_attach_controller", 00:15:33.081 "req_id": 1 00:15:33.081 } 00:15:33.081 Got JSON-RPC error response 00:15:33.081 response: 00:15:33.081 { 00:15:33.081 "code": -5, 00:15:33.082 "message": "Input/output error" 00:15:33.082 } 00:15:33.082 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:33.082 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:33.082 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:33.082 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:33.082 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.082 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.082 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:34.017 nvme0n1 00:15:34.017 
17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:34.017 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.017 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:34.275 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.275 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.275 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.533 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:34.533 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.533 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.533 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.533 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:34.533 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:34.533 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:34.792 nvme0n1 00:15:34.792 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:34.792 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:34.792 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.050 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.050 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.050 17:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.309 17:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: '' 2s 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: ]] 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MWJiMjZhYjllODg5NDVkNDVmMjIwZTMyYjkwZDMzYTKJvR25: 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:35.309 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: 2s 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:37.841 17:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: ]] 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWE1ZDlkMzRmYjI0Yjc4YTk4Yzc4MTU4ZDY2ZDVjODYxZDgyOGNiZWVlOGQ5ZTcwXUARhw==: 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:37.841 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:39.744 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:40.679 nvme0n1 00:15:40.679 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:40.679 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.679 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.679 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.679 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:40.679 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.246 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:41.246 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.246 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:41.511 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.511 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:41.511 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.511 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.511 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.511 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:41.511 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:41.791 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:41.791 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.791 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.050 17:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.050 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.619 request: 00:15:42.619 { 00:15:42.619 "name": "nvme0", 00:15:42.619 "dhchap_key": "key1", 00:15:42.619 "dhchap_ctrlr_key": "key3", 00:15:42.619 "method": "bdev_nvme_set_keys", 00:15:42.619 "req_id": 1 00:15:42.619 } 00:15:42.619 Got JSON-RPC error response 00:15:42.619 response: 00:15:42.619 { 00:15:42.619 "code": -13, 00:15:42.619 "message": "Permission denied" 00:15:42.619 } 00:15:42.619 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:42.619 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:42.619 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:42.619 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:42.619 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:42.619 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:42.619 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.878 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:42.878 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:44.255 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:45.191 nvme0n1 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.191 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:46.127 request: 00:15:46.127 { 00:15:46.127 "name": "nvme0", 00:15:46.127 "dhchap_key": "key2", 00:15:46.127 "dhchap_ctrlr_key": "key0", 00:15:46.127 "method": "bdev_nvme_set_keys", 00:15:46.127 "req_id": 1 00:15:46.127 } 00:15:46.127 Got JSON-RPC error response 00:15:46.127 response: 00:15:46.127 { 00:15:46.127 "code": -13, 00:15:46.127 "message": "Permission denied" 00:15:46.127 } 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:46.127 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:47.505 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:47.505 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:47.505 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67200 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67200 ']' 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67200 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67200 00:15:47.505 killing process with pid 67200 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:47.505 17:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67200' 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67200 00:15:47.505 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67200 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:48.073 rmmod nvme_tcp 00:15:48.073 rmmod nvme_fabrics 00:15:48.073 rmmod nvme_keyring 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70269 ']' 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70269 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70269 ']' 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70269 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70269 00:15:48.073 killing process with pid 70269 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70269' 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70269 00:15:48.073 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70269 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
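[Editor's note on the firewall teardown traced just below: the iptables-save / grep -v SPDK_NVMF / iptables-restore pipeline (the iptr helper in nvmf/common.sh) is the counterpart of the tagged ACCEPT rules the harness installs during setup via ipts, where every rule is added with -m comment --comment 'SPDK_NVMF:...'. Cleanup can then drop exactly those rules by filtering the saved ruleset on the tag and restoring the remainder. A minimal sketch of that idiom, not part of the captured run and shown outside the harness functions, with the interface/port taken from the log:

# setup: add a rule carrying a findable tag in its comment (tag value as seen in this log)
TAG='SPDK_NVMF'
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "${TAG}:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT"

# teardown: rewrite the ruleset without any line carrying the tag,
# leaving unrelated host rules untouched
iptables-save | grep -v "$TAG" | iptables-restore
]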
00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:48.332 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:48.332 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.MUs /tmp/spdk.key-sha256.xbx /tmp/spdk.key-sha384.xad /tmp/spdk.key-sha512.EMm /tmp/spdk.key-sha512.Moq /tmp/spdk.key-sha384.iO5 /tmp/spdk.key-sha256.9bU '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:48.592 ************************************ 00:15:48.592 END TEST nvmf_auth_target 00:15:48.592 ************************************ 00:15:48.592 00:15:48.592 real 3m12.451s 00:15:48.592 user 7m40.171s 00:15:48.592 sys 0m30.551s 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:48.592 17:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.592 ************************************ 00:15:48.592 START TEST nvmf_bdevio_no_huge 00:15:48.592 ************************************ 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:48.592 * Looking for test storage... 00:15:48.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:15:48.592 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:48.852 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:48.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.853 --rc genhtml_branch_coverage=1 00:15:48.853 --rc genhtml_function_coverage=1 00:15:48.853 --rc genhtml_legend=1 00:15:48.853 --rc geninfo_all_blocks=1 00:15:48.853 --rc geninfo_unexecuted_blocks=1 00:15:48.853 00:15:48.853 ' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:48.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.853 --rc genhtml_branch_coverage=1 00:15:48.853 --rc genhtml_function_coverage=1 00:15:48.853 --rc genhtml_legend=1 00:15:48.853 --rc geninfo_all_blocks=1 00:15:48.853 --rc geninfo_unexecuted_blocks=1 00:15:48.853 00:15:48.853 ' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:48.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.853 --rc genhtml_branch_coverage=1 00:15:48.853 --rc genhtml_function_coverage=1 00:15:48.853 --rc genhtml_legend=1 00:15:48.853 --rc geninfo_all_blocks=1 00:15:48.853 --rc geninfo_unexecuted_blocks=1 00:15:48.853 00:15:48.853 ' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:48.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.853 --rc genhtml_branch_coverage=1 00:15:48.853 --rc genhtml_function_coverage=1 00:15:48.853 --rc genhtml_legend=1 00:15:48.853 --rc geninfo_all_blocks=1 00:15:48.853 --rc geninfo_unexecuted_blocks=1 00:15:48.853 00:15:48.853 ' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.853 
17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.853 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.853 
17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.853 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:48.853 Cannot find device "nvmf_init_br" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:48.854 Cannot find device "nvmf_init_br2" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:48.854 Cannot find device "nvmf_tgt_br" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.854 Cannot find device "nvmf_tgt_br2" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:48.854 Cannot find device "nvmf_init_br" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:48.854 Cannot find device "nvmf_init_br2" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:48.854 Cannot find device "nvmf_tgt_br" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.854 Cannot find device "nvmf_tgt_br2" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.854 Cannot find device "nvmf_br" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:48.854 Cannot find device "nvmf_init_if" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:48.854 Cannot find device "nvmf_init_if2" 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:48.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.854 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.113 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.113 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.113 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:49.113 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:49.113 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:49.113 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:49.113 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:49.113 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:49.114 17:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:49.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:49.114 00:15:49.114 --- 10.0.0.3 ping statistics --- 00:15:49.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.114 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:49.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:49.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:49.114 00:15:49.114 --- 10.0.0.4 ping statistics --- 00:15:49.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.114 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:15:49.114 00:15:49.114 --- 10.0.0.1 ping statistics --- 00:15:49.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.114 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:49.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:49.114 00:15:49.114 --- 10.0.0.2 ping statistics --- 00:15:49.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.114 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70910 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70910 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 70910 ']' 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:49.114 17:16:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:49.114 [2024-11-04 17:16:49.912678] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
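This is where the option that gives nvmf_bdevio_no_huge its name takes effect: the target is launched inside the nvmf_tgt_ns_spdk namespace without hugepages and with a 1024 MB memory cap. A condensed restatement of the launch recorded just above (path and flags copied from the log; the flag glosses are the standard meanings of these SPDK app options):

# -i 0        shared-memory id (shows up as --file-prefix=spdk0 in the EAL args below)
# -e 0xFFFF   tracepoint group mask, echoed by app_setup_trace below
# -m 0x78     core mask = cores 3-6, matching the four reactors reported below
# --no-huge -s 1024   use 1024 MB of ordinary memory instead of hugepages
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78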
00:15:49.114 [2024-11-04 17:16:49.913060] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:49.373 [2024-11-04 17:16:50.086641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.373 [2024-11-04 17:16:50.167375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.373 [2024-11-04 17:16:50.167959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.373 [2024-11-04 17:16:50.168318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.373 [2024-11-04 17:16:50.168341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.373 [2024-11-04 17:16:50.168360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.373 [2024-11-04 17:16:50.169075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:49.373 [2024-11-04 17:16:50.169662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:49.373 [2024-11-04 17:16:50.169763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:49.373 [2024-11-04 17:16:50.169794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.638 [2024-11-04 17:16:50.176078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.225 [2024-11-04 17:16:50.952489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.225 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.226 Malloc0 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.226 17:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.226 [2024-11-04 17:16:50.996953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:50.226 17:16:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:50.226 { 00:15:50.226 "params": { 00:15:50.226 "name": "Nvme$subsystem", 00:15:50.226 "trtype": "$TEST_TRANSPORT", 00:15:50.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.226 "adrfam": "ipv4", 00:15:50.226 "trsvcid": "$NVMF_PORT", 00:15:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.226 "hdgst": ${hdgst:-false}, 00:15:50.226 "ddgst": ${ddgst:-false} 00:15:50.226 }, 00:15:50.226 "method": "bdev_nvme_attach_controller" 00:15:50.226 } 00:15:50.226 EOF 00:15:50.226 )") 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
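At this point the target side is fully provisioned: a TCP transport, a 64 MiB / 512-byte-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 10.0.0.3:4420. Collapsed into plain rpc.py calls, with arguments exactly as issued by the rpc_cmd lines above (rpc.py talks to the target over the default /var/tmp/spdk.sock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # NVMF_TRANSPORT_OPTS plus -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The bdevio binary launched next acts as the initiator: gen_nvmf_target_json emits the bdev_nvme_attach_controller configuration printed below, which bdevio reads through --json /dev/fd/62, again running with --no-huge -s 1024.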
00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:50.226 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:50.226 "params": { 00:15:50.226 "name": "Nvme1", 00:15:50.226 "trtype": "tcp", 00:15:50.226 "traddr": "10.0.0.3", 00:15:50.226 "adrfam": "ipv4", 00:15:50.226 "trsvcid": "4420", 00:15:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:50.226 "hdgst": false, 00:15:50.226 "ddgst": false 00:15:50.226 }, 00:15:50.226 "method": "bdev_nvme_attach_controller" 00:15:50.226 }' 00:15:50.485 [2024-11-04 17:16:51.051384] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:15:50.485 [2024-11-04 17:16:51.051486] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70946 ] 00:15:50.485 [2024-11-04 17:16:51.204312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.744 [2024-11-04 17:16:51.287544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.744 [2024-11-04 17:16:51.287661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.744 [2024-11-04 17:16:51.287670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.744 [2024-11-04 17:16:51.302670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:50.744 I/O targets: 00:15:50.744 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:50.744 00:15:50.744 00:15:50.744 CUnit - A unit testing framework for C - Version 2.1-3 00:15:50.744 http://cunit.sourceforge.net/ 00:15:50.744 00:15:50.744 00:15:50.744 Suite: bdevio tests on: Nvme1n1 00:15:50.744 Test: blockdev write read block ...passed 00:15:50.744 Test: blockdev write zeroes read block ...passed 00:15:50.744 Test: blockdev write zeroes read no split ...passed 00:15:50.744 Test: blockdev write zeroes read split ...passed 00:15:51.004 Test: blockdev write zeroes read split partial ...passed 00:15:51.004 Test: blockdev reset ...[2024-11-04 17:16:51.547882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:51.004 [2024-11-04 17:16:51.548236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x682310 (9): Bad file descriptor 00:15:51.004 [2024-11-04 17:16:51.565637] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:51.004 passed 00:15:51.004 Test: blockdev write read 8 blocks ...passed 00:15:51.004 Test: blockdev write read size > 128k ...passed 00:15:51.004 Test: blockdev write read invalid size ...passed 00:15:51.004 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:51.004 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:51.004 Test: blockdev write read max offset ...passed 00:15:51.004 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:51.004 Test: blockdev writev readv 8 blocks ...passed 00:15:51.004 Test: blockdev writev readv 30 x 1block ...passed 00:15:51.004 Test: blockdev writev readv block ...passed 00:15:51.004 Test: blockdev writev readv size > 128k ...passed 00:15:51.004 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:51.004 Test: blockdev comparev and writev ...[2024-11-04 17:16:51.575167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:51.004 [2024-11-04 17:16:51.575388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.575417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:51.004 [2024-11-04 17:16:51.575429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.575709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:51.004 [2024-11-04 17:16:51.575727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.575743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:51.004 [2024-11-04 17:16:51.575753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.576014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:51.004 [2024-11-04 17:16:51.576030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.576046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:51.004 [2024-11-04 17:16:51.576056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.576339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:51.004 [2024-11-04 17:16:51.576357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.576374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:51.004 [2024-11-04 17:16:51.576384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:51.004 passed 00:15:51.004 Test: blockdev nvme passthru rw ...passed 00:15:51.004 Test: blockdev nvme passthru vendor specific ...[2024-11-04 17:16:51.577263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:51.004 [2024-11-04 17:16:51.577288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.577407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:51.004 [2024-11-04 17:16:51.577424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:51.004 passed 00:15:51.004 Test: blockdev nvme admin passthru ...[2024-11-04 17:16:51.577540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:51.004 [2024-11-04 17:16:51.577560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:51.004 [2024-11-04 17:16:51.577654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:51.004 [2024-11-04 17:16:51.577669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:51.004 passed 00:15:51.004 Test: blockdev copy ...passed 00:15:51.004 00:15:51.004 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.004 suites 1 1 n/a 0 0 00:15:51.004 tests 23 23 23 0 0 00:15:51.004 asserts 152 152 152 0 n/a 00:15:51.004 00:15:51.004 Elapsed time = 0.165 seconds 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.263 17:16:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.263 rmmod nvme_tcp 00:15:51.263 rmmod nvme_fabrics 00:15:51.263 rmmod nvme_keyring 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70910 ']' 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70910 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 70910 ']' 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 70910 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70910 00:15:51.263 killing process with pid 70910 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70910' 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 70910 00:15:51.263 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 70910 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.831 17:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.831 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:52.091 00:15:52.091 real 0m3.385s 00:15:52.091 user 0m10.349s 00:15:52.091 sys 0m1.404s 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:52.091 ************************************ 00:15:52.091 END TEST nvmf_bdevio_no_huge 00:15:52.091 ************************************ 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.091 ************************************ 00:15:52.091 START TEST nvmf_tls 00:15:52.091 ************************************ 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:52.091 * Looking for test storage... 
00:15:52.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.091 --rc genhtml_branch_coverage=1 00:15:52.091 --rc genhtml_function_coverage=1 00:15:52.091 --rc genhtml_legend=1 00:15:52.091 --rc geninfo_all_blocks=1 00:15:52.091 --rc geninfo_unexecuted_blocks=1 00:15:52.091 00:15:52.091 ' 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.091 --rc genhtml_branch_coverage=1 00:15:52.091 --rc genhtml_function_coverage=1 00:15:52.091 --rc genhtml_legend=1 00:15:52.091 --rc geninfo_all_blocks=1 00:15:52.091 --rc geninfo_unexecuted_blocks=1 00:15:52.091 00:15:52.091 ' 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.091 --rc genhtml_branch_coverage=1 00:15:52.091 --rc genhtml_function_coverage=1 00:15:52.091 --rc genhtml_legend=1 00:15:52.091 --rc geninfo_all_blocks=1 00:15:52.091 --rc geninfo_unexecuted_blocks=1 00:15:52.091 00:15:52.091 ' 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:52.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.091 --rc genhtml_branch_coverage=1 00:15:52.091 --rc genhtml_function_coverage=1 00:15:52.091 --rc genhtml_legend=1 00:15:52.091 --rc geninfo_all_blocks=1 00:15:52.091 --rc geninfo_unexecuted_blocks=1 00:15:52.091 00:15:52.091 ' 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.091 17:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.091 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.350 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.350 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:52.351 
17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.351 Cannot find device "nvmf_init_br" 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.351 Cannot find device "nvmf_init_br2" 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:52.351 Cannot find device "nvmf_tgt_br" 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.351 Cannot find device "nvmf_tgt_br2" 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.351 Cannot find device "nvmf_init_br" 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.351 Cannot find device "nvmf_init_br2" 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:52.351 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.351 Cannot find device "nvmf_tgt_br" 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.351 Cannot find device "nvmf_tgt_br2" 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.351 Cannot find device "nvmf_br" 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.351 Cannot find device "nvmf_init_if" 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.351 Cannot find device "nvmf_init_if2" 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.351 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:52.351 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:52.610 17:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:52.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:15:52.610 00:15:52.610 --- 10.0.0.3 ping statistics --- 00:15:52.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.610 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:52.610 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:52.610 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:52.610 00:15:52.610 --- 10.0.0.4 ping statistics --- 00:15:52.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.610 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:52.610 00:15:52.610 --- 10.0.0.1 ping statistics --- 00:15:52.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.610 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:52.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:15:52.610 00:15:52.610 --- 10.0.0.2 ping statistics --- 00:15:52.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.610 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71176 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71176 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71176 ']' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:52.610 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.610 [2024-11-04 17:16:53.371085] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
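The trace above assembles the test network by hand: two veth pairs per path, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, addresses 10.0.0.1 and 10.0.0.2 on the initiator ends and 10.0.0.3 and 10.0.0.4 inside the namespace, everything joined through the nvmf_br bridge, TCP port 4420 opened in iptables, and a round of pings to prove reachability before the target comes up. Below is a condensed, single-path sketch of the same steps for reference, reusing the interface names and addresses from the trace; it is not the literal helper from nvmf/common.sh, and the second if2/br2 pair is built the same way.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the two free veth ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                           # initiator to target sanity check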
00:15:52.610 [2024-11-04 17:16:53.371173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.869 [2024-11-04 17:16:53.527907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.869 [2024-11-04 17:16:53.586359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.869 [2024-11-04 17:16:53.586412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.869 [2024-11-04 17:16:53.586426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.869 [2024-11-04 17:16:53.586436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.869 [2024-11-04 17:16:53.586445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.869 [2024-11-04 17:16:53.586898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.869 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:52.869 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:52.869 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:52.869 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:52.869 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.869 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.869 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:52.869 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:53.127 true 00:15:53.127 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:53.127 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:53.386 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:53.386 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:53.386 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:53.653 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:53.653 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:54.219 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:54.219 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:54.219 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:54.219 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:54.219 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:54.478 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:54.478 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:54.478 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:54.478 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:54.736 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:54.736 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:54.736 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:54.995 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:54.995 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:55.254 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:55.254 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:55.254 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:55.513 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:55.513 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:55.771 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.H9MDVO1fAi 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.FZl03bWcIu 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.H9MDVO1fAi 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.FZl03bWcIu 00:15:56.030 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:56.288 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:56.547 [2024-11-04 17:16:57.223751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:56.547 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.H9MDVO1fAi 00:15:56.547 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H9MDVO1fAi 00:15:56.547 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:56.805 [2024-11-04 17:16:57.501388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.805 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:57.063 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:57.320 [2024-11-04 17:16:57.997538] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:57.320 [2024-11-04 17:16:57.997807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:57.320 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:57.579 malloc0 00:15:57.579 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:57.838 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H9MDVO1fAi 00:15:58.097 17:16:58 
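Earlier in this stretch the default socket implementation is switched to ssl and TLS 1.3 is selected, then format_interchange_psk turns each raw key into the NVMeTLSkey-1 interchange string that gets written to a 0600 key file. The staging below redoes those steps in one place; the construction (base64 of the key bytes followed by their little-endian CRC32, with the 01 field mirroring the digest argument of 1 used above) is inferred from the helper's output and should be read as an assumption rather than a spec citation.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" sock_set_default_impl -i ssl
  "$rpc" sock_impl_set_options -i ssl --tls-version 13
  key_text=00112233445566778899aabbccddeeff
  psk=$(python3 -c "import base64, sys, zlib; key = sys.argv[1].encode(); crc = zlib.crc32(key).to_bytes(4, 'little'); print('NVMeTLSkey-1:01:' + base64.b64encode(key + crc).decode() + ':', end='')" "$key_text")
  key_path=$(mktemp)
  echo -n "$psk" > "$key_path"                 # same 'echo -n' staging as the trace
  chmod 0600 "$key_path"
  "$rpc" framework_start_init                  # subsystems start only after the socket options are set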
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:58.355 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.H9MDVO1fAi 00:16:10.597 Initializing NVMe Controllers 00:16:10.597 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:10.597 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:10.597 Initialization complete. Launching workers. 00:16:10.597 ======================================================== 00:16:10.597 Latency(us) 00:16:10.597 Device Information : IOPS MiB/s Average min max 00:16:10.597 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10340.23 40.39 6190.56 1503.03 7810.25 00:16:10.597 ======================================================== 00:16:10.597 Total : 10340.23 40.39 6190.56 1503.03 7810.25 00:16:10.597 00:16:10.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H9MDVO1fAi 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H9MDVO1fAi 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71407 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71407 /var/tmp/bdevperf.sock 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71407 ']' 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
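With the socket layer prepared, the target-side pieces traced above boil down to: a TCP transport, subsystem cnode1 backed by a malloc bdev namespace, a listener on 10.0.0.3:4420 created with -k so it negotiates TLS, the key file registered under the keyring name key0, and host1 granted access with --psk key0; spdk_nvme_perf is then run inside the namespace with -S ssl and --psk-path and produces the throughput summary shown above. A compact sketch, with $rpc and $key_path carried over from the previous sketch and every NQN, address and option taken from the trace:

  "$rpc" nvmf_create_transport -t tcp -o
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  "$rpc" bdev_malloc_create 32 4096 -b malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$rpc" keyring_file_add_key key0 "$key_path"
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$key_path"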
00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:10.597 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.597 [2024-11-04 17:17:09.307783] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:16:10.597 [2024-11-04 17:17:09.308379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71407 ] 00:16:10.597 [2024-11-04 17:17:09.473976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.597 [2024-11-04 17:17:09.530121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.597 [2024-11-04 17:17:09.590695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:10.597 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:10.597 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:10.597 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H9MDVO1fAi 00:16:10.597 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:10.597 [2024-11-04 17:17:10.738549] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:10.597 TLSTESTn1 00:16:10.597 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:10.597 Running I/O for 10 seconds... 
00:16:12.549 4310.00 IOPS, 16.84 MiB/s [2024-11-04T17:17:14.290Z] 4433.50 IOPS, 17.32 MiB/s [2024-11-04T17:17:15.224Z] 4440.33 IOPS, 17.35 MiB/s [2024-11-04T17:17:16.160Z] 4425.50 IOPS, 17.29 MiB/s [2024-11-04T17:17:17.096Z] 4405.00 IOPS, 17.21 MiB/s [2024-11-04T17:17:18.032Z] 4402.67 IOPS, 17.20 MiB/s [2024-11-04T17:17:18.967Z] 4404.43 IOPS, 17.20 MiB/s [2024-11-04T17:17:20.343Z] 4414.38 IOPS, 17.24 MiB/s [2024-11-04T17:17:21.279Z] 4419.67 IOPS, 17.26 MiB/s [2024-11-04T17:17:21.279Z] 4428.40 IOPS, 17.30 MiB/s 00:16:20.475 Latency(us) 00:16:20.475 [2024-11-04T17:17:21.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.475 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:20.475 Verification LBA range: start 0x0 length 0x2000 00:16:20.475 TLSTESTn1 : 10.02 4434.06 17.32 0.00 0.00 28815.10 5868.45 22520.55 00:16:20.475 [2024-11-04T17:17:21.279Z] =================================================================================================================== 00:16:20.475 [2024-11-04T17:17:21.279Z] Total : 4434.06 17.32 0.00 0.00 28815.10 5868.45 22520.55 00:16:20.475 { 00:16:20.475 "results": [ 00:16:20.475 { 00:16:20.475 "job": "TLSTESTn1", 00:16:20.475 "core_mask": "0x4", 00:16:20.475 "workload": "verify", 00:16:20.475 "status": "finished", 00:16:20.475 "verify_range": { 00:16:20.475 "start": 0, 00:16:20.475 "length": 8192 00:16:20.475 }, 00:16:20.475 "queue_depth": 128, 00:16:20.475 "io_size": 4096, 00:16:20.475 "runtime": 10.015885, 00:16:20.475 "iops": 4434.056501247768, 00:16:20.475 "mibps": 17.320533207999095, 00:16:20.475 "io_failed": 0, 00:16:20.475 "io_timeout": 0, 00:16:20.475 "avg_latency_us": 28815.104818011918, 00:16:20.475 "min_latency_us": 5868.450909090909, 00:16:20.475 "max_latency_us": 22520.552727272727 00:16:20.475 } 00:16:20.475 ], 00:16:20.475 "core_count": 1 00:16:20.475 } 00:16:20.475 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:20.475 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71407 00:16:20.475 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71407 ']' 00:16:20.475 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71407 00:16:20.475 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:20.475 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:20.475 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71407 00:16:20.475 killing process with pid 71407 00:16:20.475 Received shutdown signal, test time was about 10.000000 seconds 00:16:20.475 00:16:20.475 Latency(us) 00:16:20.475 [2024-11-04T17:17:21.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.475 [2024-11-04T17:17:21.279Z] =================================================================================================================== 00:16:20.475 [2024-11-04T17:17:21.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:20.475 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:20.475 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 71407' 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71407 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71407 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZl03bWcIu 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZl03bWcIu 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZl03bWcIu 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FZl03bWcIu 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71542 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71542 /var/tmp/bdevperf.sock 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71542 ']' 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:20.476 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.476 [2024-11-04 17:17:21.254093] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:16:20.476 [2024-11-04 17:17:21.254222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71542 ] 00:16:20.734 [2024-11-04 17:17:21.403414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.734 [2024-11-04 17:17:21.451484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.735 [2024-11-04 17:17:21.505667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:20.994 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:20.994 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:20.994 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FZl03bWcIu 00:16:21.253 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:21.513 [2024-11-04 17:17:22.072568] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.513 [2024-11-04 17:17:22.078940] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:21.513 [2024-11-04 17:17:22.079321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c5fb0 (107): Transport endpoint is not connected 00:16:21.513 [2024-11-04 17:17:22.080313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c5fb0 (9): Bad file descriptor 00:16:21.513 [2024-11-04 17:17:22.081313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:21.513 [2024-11-04 17:17:22.081354] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:21.513 [2024-11-04 17:17:22.081364] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:21.513 [2024-11-04 17:17:22.081374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:21.513 request: 00:16:21.513 { 00:16:21.513 "name": "TLSTEST", 00:16:21.513 "trtype": "tcp", 00:16:21.513 "traddr": "10.0.0.3", 00:16:21.513 "adrfam": "ipv4", 00:16:21.513 "trsvcid": "4420", 00:16:21.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.513 "prchk_reftag": false, 00:16:21.513 "prchk_guard": false, 00:16:21.513 "hdgst": false, 00:16:21.513 "ddgst": false, 00:16:21.513 "psk": "key0", 00:16:21.513 "allow_unrecognized_csi": false, 00:16:21.513 "method": "bdev_nvme_attach_controller", 00:16:21.513 "req_id": 1 00:16:21.513 } 00:16:21.513 Got JSON-RPC error response 00:16:21.513 response: 00:16:21.513 { 00:16:21.513 "code": -5, 00:16:21.513 "message": "Input/output error" 00:16:21.513 } 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71542 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71542 ']' 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71542 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71542 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71542' 00:16:21.513 killing process with pid 71542 00:16:21.513 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71542 00:16:21.513 Received shutdown signal, test time was about 10.000000 seconds 00:16:21.513 00:16:21.513 Latency(us) 00:16:21.513 [2024-11-04T17:17:22.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.513 [2024-11-04T17:17:22.317Z] =================================================================================================================== 00:16:21.513 [2024-11-04T17:17:22.317Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:21.514 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71542 00:16:21.514 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H9MDVO1fAi 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H9MDVO1fAi 
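The bdevperf half of the test is driven entirely over a second RPC socket: bdevperf is started idle with -z, the key file is added to its keyring, bdev_nvme_attach_controller opens the TLS-protected connection with --psk key0, and bdevperf.py perform_tests runs the verify workload (the successful run summarized earlier). The failing case above is the identical flow pointed at /tmp/tmp.FZl03bWcIu, a key the target never associated with host1, so the TLS setup falls apart and the attach RPC surfaces the Input/output error. A sketch of the flow, with paths and NQNs from the trace; the sleep stands in for the script's waitforlisten on the RPC socket:

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  sleep 2
  "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests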
00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.H9MDVO1fAi 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H9MDVO1fAi 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71563 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71563 /var/tmp/bdevperf.sock 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71563 ']' 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:21.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:21.773 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.773 [2024-11-04 17:17:22.364834] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:16:21.773 [2024-11-04 17:17:22.364940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71563 ] 00:16:21.773 [2024-11-04 17:17:22.507112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.773 [2024-11-04 17:17:22.555987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.032 [2024-11-04 17:17:22.611486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:22.032 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:22.032 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:22.032 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H9MDVO1fAi 00:16:22.290 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:22.550 [2024-11-04 17:17:23.165605] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:22.550 [2024-11-04 17:17:23.170732] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:22.550 [2024-11-04 17:17:23.170771] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:22.550 [2024-11-04 17:17:23.170825] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:22.550 [2024-11-04 17:17:23.171447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b6fb0 (107): Transport endpoint is not connected 00:16:22.550 [2024-11-04 17:17:23.172434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b6fb0 (9): Bad file descriptor 00:16:22.550 [2024-11-04 17:17:23.173432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:22.550 [2024-11-04 17:17:23.173462] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:22.550 [2024-11-04 17:17:23.173472] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:22.550 [2024-11-04 17:17:23.173484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:22.550 request: 00:16:22.550 { 00:16:22.550 "name": "TLSTEST", 00:16:22.550 "trtype": "tcp", 00:16:22.550 "traddr": "10.0.0.3", 00:16:22.550 "adrfam": "ipv4", 00:16:22.550 "trsvcid": "4420", 00:16:22.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.550 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:22.550 "prchk_reftag": false, 00:16:22.550 "prchk_guard": false, 00:16:22.550 "hdgst": false, 00:16:22.550 "ddgst": false, 00:16:22.550 "psk": "key0", 00:16:22.550 "allow_unrecognized_csi": false, 00:16:22.550 "method": "bdev_nvme_attach_controller", 00:16:22.550 "req_id": 1 00:16:22.550 } 00:16:22.550 Got JSON-RPC error response 00:16:22.550 response: 00:16:22.550 { 00:16:22.550 "code": -5, 00:16:22.550 "message": "Input/output error" 00:16:22.550 } 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71563 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71563 ']' 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71563 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71563 00:16:22.550 killing process with pid 71563 00:16:22.550 Received shutdown signal, test time was about 10.000000 seconds 00:16:22.550 00:16:22.550 Latency(us) 00:16:22.550 [2024-11-04T17:17:23.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.550 [2024-11-04T17:17:23.354Z] =================================================================================================================== 00:16:22.550 [2024-11-04T17:17:23.354Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71563' 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71563 00:16:22.550 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71563 00:16:22.808 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:22.808 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:22.808 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.808 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.808 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.808 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H9MDVO1fAi 00:16:22.808 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H9MDVO1fAi 
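The negative cases in this stretch differ only in which half of the TLS identity is wrong. The target resolves the PSK from the identity string the initiator presents, NVMe0R01 <hostnqn> <subnqn> as the error messages show, so connecting as host2 fails simply because no PSK was ever registered for that host on cnode1, even though the key material itself is the valid one. Purely as an illustration (this is not something the test does), the lookup would succeed if host2 had its own grant:

  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0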
00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:22.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.H9MDVO1fAi 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H9MDVO1fAi 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71585 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71585 /var/tmp/bdevperf.sock 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71585 ']' 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:22.809 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.809 [2024-11-04 17:17:23.455223] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:16:22.809 [2024-11-04 17:17:23.455348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71585 ] 00:16:22.809 [2024-11-04 17:17:23.600778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.067 [2024-11-04 17:17:23.657273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.067 [2024-11-04 17:17:23.712145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.067 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:23.067 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:23.067 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H9MDVO1fAi 00:16:23.325 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:23.586 [2024-11-04 17:17:24.289800] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:23.586 [2024-11-04 17:17:24.299096] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:23.586 [2024-11-04 17:17:24.299147] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:23.586 [2024-11-04 17:17:24.299206] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:23.586 [2024-11-04 17:17:24.299249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8efb0 (107): Transport endpoint is not connected 00:16:23.586 [2024-11-04 17:17:24.300256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8efb0 (9): Bad file descriptor 00:16:23.586 [2024-11-04 17:17:24.301238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:23.586 [2024-11-04 17:17:24.301282] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:23.586 [2024-11-04 17:17:24.301292] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:23.586 [2024-11-04 17:17:24.301302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:23.586 request: 00:16:23.586 { 00:16:23.586 "name": "TLSTEST", 00:16:23.586 "trtype": "tcp", 00:16:23.586 "traddr": "10.0.0.3", 00:16:23.586 "adrfam": "ipv4", 00:16:23.586 "trsvcid": "4420", 00:16:23.586 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:23.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.586 "prchk_reftag": false, 00:16:23.586 "prchk_guard": false, 00:16:23.586 "hdgst": false, 00:16:23.586 "ddgst": false, 00:16:23.586 "psk": "key0", 00:16:23.586 "allow_unrecognized_csi": false, 00:16:23.586 "method": "bdev_nvme_attach_controller", 00:16:23.586 "req_id": 1 00:16:23.586 } 00:16:23.586 Got JSON-RPC error response 00:16:23.586 response: 00:16:23.586 { 00:16:23.586 "code": -5, 00:16:23.586 "message": "Input/output error" 00:16:23.586 } 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71585 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71585 ']' 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71585 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71585 00:16:23.586 killing process with pid 71585 00:16:23.586 Received shutdown signal, test time was about 10.000000 seconds 00:16:23.586 00:16:23.586 Latency(us) 00:16:23.586 [2024-11-04T17:17:24.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.586 [2024-11-04T17:17:24.390Z] =================================================================================================================== 00:16:23.586 [2024-11-04T17:17:24.390Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71585' 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71585 00:16:23.586 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71585 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:23.855 17:17:24 
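The cnode2 variant fails the same lookup from the other direction: host1 offers an identity naming a subsystem for which nothing was ever configured, so there is no listener or PSK mapping to match it. Hypothetically, reaching a second subsystem over TLS would need the same per-subsystem setup that cnode1 received; none of the commands below are in the test, and the serial number is made up for illustration.

  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 -k
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0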
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71606 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71606 /var/tmp/bdevperf.sock 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71606 ']' 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:23.855 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.855 [2024-11-04 17:17:24.586456] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:16:23.855 [2024-11-04 17:17:24.586567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71606 ] 00:16:24.114 [2024-11-04 17:17:24.726086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.114 [2024-11-04 17:17:24.783158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.114 [2024-11-04 17:17:24.837761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.050 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:25.050 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:25.050 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:25.050 [2024-11-04 17:17:25.767582] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:25.050 [2024-11-04 17:17:25.767638] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:25.050 request: 00:16:25.050 { 00:16:25.050 "name": "key0", 00:16:25.050 "path": "", 00:16:25.050 "method": "keyring_file_add_key", 00:16:25.050 "req_id": 1 00:16:25.050 } 00:16:25.050 Got JSON-RPC error response 00:16:25.050 response: 00:16:25.050 { 00:16:25.050 "code": -1, 00:16:25.050 "message": "Operation not permitted" 00:16:25.050 } 00:16:25.050 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:25.309 [2024-11-04 17:17:26.023803] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:25.309 [2024-11-04 17:17:26.023895] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:25.309 request: 00:16:25.309 { 00:16:25.309 "name": "TLSTEST", 00:16:25.309 "trtype": "tcp", 00:16:25.309 "traddr": "10.0.0.3", 00:16:25.309 "adrfam": "ipv4", 00:16:25.309 "trsvcid": "4420", 00:16:25.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.309 "prchk_reftag": false, 00:16:25.309 "prchk_guard": false, 00:16:25.309 "hdgst": false, 00:16:25.309 "ddgst": false, 00:16:25.309 "psk": "key0", 00:16:25.309 "allow_unrecognized_csi": false, 00:16:25.309 "method": "bdev_nvme_attach_controller", 00:16:25.309 "req_id": 1 00:16:25.309 } 00:16:25.309 Got JSON-RPC error response 00:16:25.309 response: 00:16:25.309 { 00:16:25.309 "code": -126, 00:16:25.309 "message": "Required key not available" 00:16:25.309 } 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71606 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71606 ']' 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71606 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:25.309 17:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71606 00:16:25.309 killing process with pid 71606 00:16:25.309 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.309 00:16:25.309 Latency(us) 00:16:25.309 [2024-11-04T17:17:26.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.309 [2024-11-04T17:17:26.113Z] =================================================================================================================== 00:16:25.309 [2024-11-04T17:17:26.113Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71606' 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71606 00:16:25.309 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71606 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71176 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71176 ']' 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71176 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71176 00:16:25.568 killing process with pid 71176 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71176' 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71176 00:16:25.568 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71176 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.6sFWvfQYsR 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.6sFWvfQYsR 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71650 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71650 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71650 ']' 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:25.827 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.827 [2024-11-04 17:17:26.602955] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:16:25.827 [2024-11-04 17:17:26.603073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.086 [2024-11-04 17:17:26.744951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.086 [2024-11-04 17:17:26.794137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.086 [2024-11-04 17:17:26.794198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
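The NVMeTLSkey-1:02:...: string generated above is the TLS PSK interchange format: the configured key bytes with a CRC32 appended, base64-encoded, with the 02 field carrying the digest value (2 is passed to format_interchange_psk here). A rough sketch of that construction; it mirrors the inline python the helper runs, but the CRC byte order and the meaning of the digest field are stated as assumptions, not quoted from nvmf/common.sh:

# Rebuild the interchange string for the configured key used in the trace.
key=00112233445566778899aabbccddeeff0011223344556677
python3 -c "
import base64, zlib
key = b'${key}'
crc = zlib.crc32(key).to_bytes(4, 'little')   # CRC32 of the key bytes, assumed little-endian
print('NVMeTLSkey-1:02:' + base64.b64encode(key + crc).decode() + ':')
"
# The harness writes the result to a mktemp file and chmod's it 0600; as the later
# failure case shows, the keyring refuses key files that are group- or world-readable.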
00:16:26.086 [2024-11-04 17:17:26.794208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.086 [2024-11-04 17:17:26.794214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.086 [2024-11-04 17:17:26.794250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.086 [2024-11-04 17:17:26.794657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.086 [2024-11-04 17:17:26.851876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.6sFWvfQYsR 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6sFWvfQYsR 00:16:27.020 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:27.020 [2024-11-04 17:17:27.819022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.278 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:27.278 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:27.536 [2024-11-04 17:17:28.283161] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:27.536 [2024-11-04 17:17:28.283382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:27.536 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:27.795 malloc0 00:16:27.795 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:28.053 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:16:28.320 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:28.600 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6sFWvfQYsR 00:16:28.600 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
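At this point setup_nvmf_tgt (target/tls.sh@50-59) has built the TLS-enabled target end to end. Condensed from the trace above into one place (rpc = scripts/rpc.py, key file as created earlier):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.6sFWvfQYsR

$rpc nvmf_create_transport -t tcp -o                       # TCP transport init
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k                          # -k: TLS (secure channel) listener
$rpc bdev_malloc_create 32 4096 -b malloc0                 # backing namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"                      # register the PSK file in the keyring
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0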
00:16:28.600 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:28.600 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:28.600 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6sFWvfQYsR 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71704 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71704 /var/tmp/bdevperf.sock 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71704 ']' 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:28.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:28.601 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.601 [2024-11-04 17:17:29.353754] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
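Once this bdevperf instance (pid 71704) answers on /var/tmp/bdevperf.sock, the records that follow show the initiator half of the test: the same key file is registered in bdevperf's own keyring and the controller is attached with --psk, which is what requests TLS on the TCP connection. Condensed from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

$rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# The resulting bdev shows up as TLSTESTn1; bdevperf.py ... perform_tests then drives
# the 128-deep 4k verify workload whose per-second IOPS samples follow.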
00:16:28.601 [2024-11-04 17:17:29.353876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71704 ] 00:16:28.859 [2024-11-04 17:17:29.512425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.859 [2024-11-04 17:17:29.575298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.859 [2024-11-04 17:17:29.633579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.793 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:29.793 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:29.793 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:16:29.793 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:30.051 [2024-11-04 17:17:30.787326] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:30.309 TLSTESTn1 00:16:30.309 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:30.309 Running I/O for 10 seconds... 00:16:32.621 4189.00 IOPS, 16.36 MiB/s [2024-11-04T17:17:34.032Z] 4325.00 IOPS, 16.89 MiB/s [2024-11-04T17:17:35.410Z] 4351.33 IOPS, 17.00 MiB/s [2024-11-04T17:17:36.345Z] 4347.50 IOPS, 16.98 MiB/s [2024-11-04T17:17:37.281Z] 4341.40 IOPS, 16.96 MiB/s [2024-11-04T17:17:38.217Z] 4347.17 IOPS, 16.98 MiB/s [2024-11-04T17:17:39.152Z] 4352.29 IOPS, 17.00 MiB/s [2024-11-04T17:17:40.088Z] 4354.62 IOPS, 17.01 MiB/s [2024-11-04T17:17:41.023Z] 4358.67 IOPS, 17.03 MiB/s [2024-11-04T17:17:41.282Z] 4362.40 IOPS, 17.04 MiB/s 00:16:40.478 Latency(us) 00:16:40.478 [2024-11-04T17:17:41.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.478 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:40.478 Verification LBA range: start 0x0 length 0x2000 00:16:40.478 TLSTESTn1 : 10.02 4368.29 17.06 0.00 0.00 29250.29 5332.25 22758.87 00:16:40.478 [2024-11-04T17:17:41.282Z] =================================================================================================================== 00:16:40.478 [2024-11-04T17:17:41.282Z] Total : 4368.29 17.06 0.00 0.00 29250.29 5332.25 22758.87 00:16:40.478 { 00:16:40.478 "results": [ 00:16:40.478 { 00:16:40.478 "job": "TLSTESTn1", 00:16:40.478 "core_mask": "0x4", 00:16:40.478 "workload": "verify", 00:16:40.478 "status": "finished", 00:16:40.478 "verify_range": { 00:16:40.478 "start": 0, 00:16:40.478 "length": 8192 00:16:40.478 }, 00:16:40.478 "queue_depth": 128, 00:16:40.478 "io_size": 4096, 00:16:40.478 "runtime": 10.015354, 00:16:40.478 "iops": 4368.292923045955, 00:16:40.478 "mibps": 17.06364423064826, 00:16:40.478 "io_failed": 0, 00:16:40.478 "io_timeout": 0, 00:16:40.478 "avg_latency_us": 29250.29068534026, 00:16:40.478 "min_latency_us": 5332.2472727272725, 00:16:40.478 
"max_latency_us": 22758.865454545456 00:16:40.478 } 00:16:40.478 ], 00:16:40.478 "core_count": 1 00:16:40.478 } 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71704 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71704 ']' 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71704 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71704 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:40.478 killing process with pid 71704 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71704' 00:16:40.478 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71704 00:16:40.478 Received shutdown signal, test time was about 10.000000 seconds 00:16:40.478 00:16:40.479 Latency(us) 00:16:40.479 [2024-11-04T17:17:41.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.479 [2024-11-04T17:17:41.283Z] =================================================================================================================== 00:16:40.479 [2024-11-04T17:17:41.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.479 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71704 00:16:40.479 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.6sFWvfQYsR 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6sFWvfQYsR 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6sFWvfQYsR 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6sFWvfQYsR 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6sFWvfQYsR 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71841 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71841 /var/tmp/bdevperf.sock 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71841 ']' 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:40.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:40.738 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.738 [2024-11-04 17:17:41.344602] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
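This bdevperf run (pid 71841) is the negative permissions case: target/tls.sh@171 loosened the key file to 0666 before relaunching, and the next records show keyring_file_add_key rejecting it ("Invalid permissions ... 0100666"), after which the attach fails with "Required key not available". The rule can be exercised directly; a small sketch, with the echo messages purely illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
key=/tmp/tmp.6sFWvfQYsR

chmod 0666 "$key"
if $rpc -s "$sock" keyring_file_add_key key0 "$key"; then
    echo "unexpected: world-readable key file was accepted"
else
    echo "rejected as expected: the keyring appears to require owner-only permissions"
fi

chmod 0600 "$key"
$rpc -s "$sock" keyring_file_add_key key0 "$key"   # accepted once the file is 0600 again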
00:16:40.738 [2024-11-04 17:17:41.344716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71841 ] 00:16:40.738 [2024-11-04 17:17:41.490364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.738 [2024-11-04 17:17:41.536124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.997 [2024-11-04 17:17:41.590589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.997 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:40.997 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:40.998 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:16:41.256 [2024-11-04 17:17:41.931736] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6sFWvfQYsR': 0100666 00:16:41.256 [2024-11-04 17:17:41.931787] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:41.256 request: 00:16:41.256 { 00:16:41.256 "name": "key0", 00:16:41.256 "path": "/tmp/tmp.6sFWvfQYsR", 00:16:41.256 "method": "keyring_file_add_key", 00:16:41.256 "req_id": 1 00:16:41.256 } 00:16:41.256 Got JSON-RPC error response 00:16:41.256 response: 00:16:41.256 { 00:16:41.256 "code": -1, 00:16:41.256 "message": "Operation not permitted" 00:16:41.256 } 00:16:41.256 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:41.516 [2024-11-04 17:17:42.227896] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:41.516 [2024-11-04 17:17:42.227971] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:41.516 request: 00:16:41.516 { 00:16:41.516 "name": "TLSTEST", 00:16:41.516 "trtype": "tcp", 00:16:41.516 "traddr": "10.0.0.3", 00:16:41.516 "adrfam": "ipv4", 00:16:41.516 "trsvcid": "4420", 00:16:41.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.516 "prchk_reftag": false, 00:16:41.516 "prchk_guard": false, 00:16:41.516 "hdgst": false, 00:16:41.516 "ddgst": false, 00:16:41.516 "psk": "key0", 00:16:41.516 "allow_unrecognized_csi": false, 00:16:41.516 "method": "bdev_nvme_attach_controller", 00:16:41.516 "req_id": 1 00:16:41.516 } 00:16:41.516 Got JSON-RPC error response 00:16:41.516 response: 00:16:41.516 { 00:16:41.516 "code": -126, 00:16:41.516 "message": "Required key not available" 00:16:41.516 } 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71841 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71841 ']' 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71841 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71841 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:41.516 killing process with pid 71841 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:41.516 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71841' 00:16:41.516 Received shutdown signal, test time was about 10.000000 seconds 00:16:41.516 00:16:41.516 Latency(us) 00:16:41.516 [2024-11-04T17:17:42.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.516 [2024-11-04T17:17:42.320Z] =================================================================================================================== 00:16:41.516 [2024-11-04T17:17:42.320Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:41.517 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71841 00:16:41.517 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71841 00:16:41.776 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:41.776 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:41.776 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71650 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71650 ']' 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71650 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71650 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:41.777 killing process with pid 71650 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71650' 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71650 00:16:41.777 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71650 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71873 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71873 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71873 ']' 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:42.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:42.036 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.036 [2024-11-04 17:17:42.751127] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:16:42.036 [2024-11-04 17:17:42.751195] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.294 [2024-11-04 17:17:42.889133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.294 [2024-11-04 17:17:42.942723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.294 [2024-11-04 17:17:42.942772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.295 [2024-11-04 17:17:42.942783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.295 [2024-11-04 17:17:42.942792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.295 [2024-11-04 17:17:42.942799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
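The freshly started target (pid 71873) backs the next expected-failure check: target/tls.sh@178 wraps setup_nvmf_tgt in the NOT helper, whose trace (es=0 ... es=1 ... (( !es == 0 ))) shows it inverting the exit status so that a failing setup counts as a pass while the key file is still 0666. A rough stand-in for that idiom; the real helper in autotest_common.sh does more argument validation:

# Succeed only when the wrapped command fails, so negative tests read naturally.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # flip: non-zero from the command becomes success for the caller
}

NOT setup_nvmf_tgt /tmp/tmp.6sFWvfQYsR   # expected to fail until the key file is chmod'ed back to 0600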
00:16:42.295 [2024-11-04 17:17:42.943210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.295 [2024-11-04 17:17:42.997797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.6sFWvfQYsR 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6sFWvfQYsR 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.6sFWvfQYsR 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6sFWvfQYsR 00:16:43.230 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:43.489 [2024-11-04 17:17:44.084177] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.489 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:43.748 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:44.006 [2024-11-04 17:17:44.592260] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:44.006 [2024-11-04 17:17:44.592493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:44.006 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:44.265 malloc0 00:16:44.265 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:44.523 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:16:44.523 
[2024-11-04 17:17:45.323489] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6sFWvfQYsR': 0100666 00:16:44.523 [2024-11-04 17:17:45.323563] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:44.782 request: 00:16:44.782 { 00:16:44.782 "name": "key0", 00:16:44.782 "path": "/tmp/tmp.6sFWvfQYsR", 00:16:44.782 "method": "keyring_file_add_key", 00:16:44.782 "req_id": 1 00:16:44.782 } 00:16:44.782 Got JSON-RPC error response 00:16:44.782 response: 00:16:44.782 { 00:16:44.782 "code": -1, 00:16:44.782 "message": "Operation not permitted" 00:16:44.782 } 00:16:44.782 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:44.782 [2024-11-04 17:17:45.563601] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:44.782 [2024-11-04 17:17:45.563678] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:44.782 request: 00:16:44.782 { 00:16:44.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.782 "host": "nqn.2016-06.io.spdk:host1", 00:16:44.782 "psk": "key0", 00:16:44.782 "method": "nvmf_subsystem_add_host", 00:16:44.782 "req_id": 1 00:16:44.782 } 00:16:44.782 Got JSON-RPC error response 00:16:44.782 response: 00:16:44.782 { 00:16:44.782 "code": -32603, 00:16:44.782 "message": "Internal error" 00:16:44.782 } 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71873 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71873 ']' 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71873 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71873 00:16:45.040 killing process with pid 71873 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71873' 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71873 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71873 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.6sFWvfQYsR 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71942 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71942 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71942 ']' 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:45.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:45.040 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.299 [2024-11-04 17:17:45.886302] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:16:45.299 [2024-11-04 17:17:45.886392] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.299 [2024-11-04 17:17:46.034797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.299 [2024-11-04 17:17:46.082227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.299 [2024-11-04 17:17:46.082301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.299 [2024-11-04 17:17:46.082327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.299 [2024-11-04 17:17:46.082335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.299 [2024-11-04 17:17:46.082342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.299 [2024-11-04 17:17:46.082737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.557 [2024-11-04 17:17:46.134726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.6sFWvfQYsR 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6sFWvfQYsR 00:16:45.557 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:45.815 [2024-11-04 17:17:46.511360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.815 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:46.074 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:46.332 [2024-11-04 17:17:47.083534] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:46.332 [2024-11-04 17:17:47.083770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:46.332 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:46.590 malloc0 00:16:46.590 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:46.854 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:16:47.112 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71990 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71990 /var/tmp/bdevperf.sock 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71990 ']' 
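With the key file back at 0600, this final phase rebuilds the full target-plus-bdevperf stack and then snapshots both applications' configuration; the large JSON blobs that follow are the output of save_config against each RPC socket (target/tls.sh@198 and @199). To pull out just the TLS-relevant pieces for inspection or diffing, something like the following works; the jq filters are illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target-side config (default /var/tmp/spdk.sock): keyring and nvmf subsystems only
$rpc save_config | jq '.subsystems[] | select(.subsystem == "keyring" or .subsystem == "nvmf")'

# bdevperf-side config on its private socket: the registered key and attached controller
$rpc -s /var/tmp/bdevperf.sock save_config | jq '.subsystems[] | select(.subsystem == "keyring" or .subsystem == "bdev")'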
00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:47.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:47.371 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.371 [2024-11-04 17:17:48.149929] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:16:47.371 [2024-11-04 17:17:48.150051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71990 ] 00:16:47.630 [2024-11-04 17:17:48.298576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.630 [2024-11-04 17:17:48.360855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.630 [2024-11-04 17:17:48.418933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.889 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:47.889 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:47.889 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:16:48.148 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:48.148 [2024-11-04 17:17:48.937013] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.406 TLSTESTn1 00:16:48.406 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:48.666 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:48.666 "subsystems": [ 00:16:48.666 { 00:16:48.666 "subsystem": "keyring", 00:16:48.666 "config": [ 00:16:48.666 { 00:16:48.666 "method": "keyring_file_add_key", 00:16:48.666 "params": { 00:16:48.666 "name": "key0", 00:16:48.666 "path": "/tmp/tmp.6sFWvfQYsR" 00:16:48.666 } 00:16:48.666 } 00:16:48.666 ] 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "subsystem": "iobuf", 00:16:48.666 "config": [ 00:16:48.666 { 00:16:48.666 "method": "iobuf_set_options", 00:16:48.666 "params": { 00:16:48.666 "small_pool_count": 8192, 00:16:48.666 "large_pool_count": 1024, 00:16:48.666 "small_bufsize": 8192, 00:16:48.666 "large_bufsize": 135168, 00:16:48.666 "enable_numa": false 00:16:48.666 } 00:16:48.666 } 00:16:48.666 ] 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "subsystem": "sock", 00:16:48.666 "config": [ 00:16:48.666 { 00:16:48.666 "method": "sock_set_default_impl", 00:16:48.666 "params": { 
00:16:48.666 "impl_name": "uring" 00:16:48.666 } 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "method": "sock_impl_set_options", 00:16:48.666 "params": { 00:16:48.666 "impl_name": "ssl", 00:16:48.666 "recv_buf_size": 4096, 00:16:48.666 "send_buf_size": 4096, 00:16:48.666 "enable_recv_pipe": true, 00:16:48.666 "enable_quickack": false, 00:16:48.666 "enable_placement_id": 0, 00:16:48.666 "enable_zerocopy_send_server": true, 00:16:48.666 "enable_zerocopy_send_client": false, 00:16:48.666 "zerocopy_threshold": 0, 00:16:48.666 "tls_version": 0, 00:16:48.666 "enable_ktls": false 00:16:48.666 } 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "method": "sock_impl_set_options", 00:16:48.666 "params": { 00:16:48.666 "impl_name": "posix", 00:16:48.666 "recv_buf_size": 2097152, 00:16:48.666 "send_buf_size": 2097152, 00:16:48.666 "enable_recv_pipe": true, 00:16:48.666 "enable_quickack": false, 00:16:48.666 "enable_placement_id": 0, 00:16:48.666 "enable_zerocopy_send_server": true, 00:16:48.666 "enable_zerocopy_send_client": false, 00:16:48.666 "zerocopy_threshold": 0, 00:16:48.666 "tls_version": 0, 00:16:48.666 "enable_ktls": false 00:16:48.666 } 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "method": "sock_impl_set_options", 00:16:48.666 "params": { 00:16:48.666 "impl_name": "uring", 00:16:48.666 "recv_buf_size": 2097152, 00:16:48.666 "send_buf_size": 2097152, 00:16:48.666 "enable_recv_pipe": true, 00:16:48.666 "enable_quickack": false, 00:16:48.666 "enable_placement_id": 0, 00:16:48.666 "enable_zerocopy_send_server": false, 00:16:48.666 "enable_zerocopy_send_client": false, 00:16:48.666 "zerocopy_threshold": 0, 00:16:48.666 "tls_version": 0, 00:16:48.666 "enable_ktls": false 00:16:48.666 } 00:16:48.666 } 00:16:48.666 ] 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "subsystem": "vmd", 00:16:48.666 "config": [] 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "subsystem": "accel", 00:16:48.666 "config": [ 00:16:48.666 { 00:16:48.666 "method": "accel_set_options", 00:16:48.666 "params": { 00:16:48.666 "small_cache_size": 128, 00:16:48.666 "large_cache_size": 16, 00:16:48.666 "task_count": 2048, 00:16:48.666 "sequence_count": 2048, 00:16:48.666 "buf_count": 2048 00:16:48.666 } 00:16:48.666 } 00:16:48.666 ] 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "subsystem": "bdev", 00:16:48.666 "config": [ 00:16:48.666 { 00:16:48.666 "method": "bdev_set_options", 00:16:48.666 "params": { 00:16:48.666 "bdev_io_pool_size": 65535, 00:16:48.666 "bdev_io_cache_size": 256, 00:16:48.666 "bdev_auto_examine": true, 00:16:48.666 "iobuf_small_cache_size": 128, 00:16:48.666 "iobuf_large_cache_size": 16 00:16:48.666 } 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "method": "bdev_raid_set_options", 00:16:48.666 "params": { 00:16:48.666 "process_window_size_kb": 1024, 00:16:48.666 "process_max_bandwidth_mb_sec": 0 00:16:48.666 } 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "method": "bdev_iscsi_set_options", 00:16:48.666 "params": { 00:16:48.666 "timeout_sec": 30 00:16:48.666 } 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "method": "bdev_nvme_set_options", 00:16:48.666 "params": { 00:16:48.666 "action_on_timeout": "none", 00:16:48.666 "timeout_us": 0, 00:16:48.666 "timeout_admin_us": 0, 00:16:48.666 "keep_alive_timeout_ms": 10000, 00:16:48.666 "arbitration_burst": 0, 00:16:48.666 "low_priority_weight": 0, 00:16:48.666 "medium_priority_weight": 0, 00:16:48.666 "high_priority_weight": 0, 00:16:48.666 "nvme_adminq_poll_period_us": 10000, 00:16:48.666 "nvme_ioq_poll_period_us": 0, 00:16:48.666 "io_queue_requests": 0, 00:16:48.666 "delay_cmd_submit": 
true, 00:16:48.666 "transport_retry_count": 4, 00:16:48.666 "bdev_retry_count": 3, 00:16:48.666 "transport_ack_timeout": 0, 00:16:48.666 "ctrlr_loss_timeout_sec": 0, 00:16:48.666 "reconnect_delay_sec": 0, 00:16:48.666 "fast_io_fail_timeout_sec": 0, 00:16:48.666 "disable_auto_failback": false, 00:16:48.666 "generate_uuids": false, 00:16:48.666 "transport_tos": 0, 00:16:48.666 "nvme_error_stat": false, 00:16:48.666 "rdma_srq_size": 0, 00:16:48.666 "io_path_stat": false, 00:16:48.666 "allow_accel_sequence": false, 00:16:48.666 "rdma_max_cq_size": 0, 00:16:48.666 "rdma_cm_event_timeout_ms": 0, 00:16:48.666 "dhchap_digests": [ 00:16:48.666 "sha256", 00:16:48.666 "sha384", 00:16:48.666 "sha512" 00:16:48.666 ], 00:16:48.666 "dhchap_dhgroups": [ 00:16:48.666 "null", 00:16:48.666 "ffdhe2048", 00:16:48.666 "ffdhe3072", 00:16:48.666 "ffdhe4096", 00:16:48.666 "ffdhe6144", 00:16:48.666 "ffdhe8192" 00:16:48.666 ] 00:16:48.666 } 00:16:48.666 }, 00:16:48.666 { 00:16:48.666 "method": "bdev_nvme_set_hotplug", 00:16:48.667 "params": { 00:16:48.667 "period_us": 100000, 00:16:48.667 "enable": false 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "bdev_malloc_create", 00:16:48.667 "params": { 00:16:48.667 "name": "malloc0", 00:16:48.667 "num_blocks": 8192, 00:16:48.667 "block_size": 4096, 00:16:48.667 "physical_block_size": 4096, 00:16:48.667 "uuid": "5f8badd3-7b91-4220-90d8-cd7ca60bcc17", 00:16:48.667 "optimal_io_boundary": 0, 00:16:48.667 "md_size": 0, 00:16:48.667 "dif_type": 0, 00:16:48.667 "dif_is_head_of_md": false, 00:16:48.667 "dif_pi_format": 0 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "bdev_wait_for_examine" 00:16:48.667 } 00:16:48.667 ] 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "subsystem": "nbd", 00:16:48.667 "config": [] 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "subsystem": "scheduler", 00:16:48.667 "config": [ 00:16:48.667 { 00:16:48.667 "method": "framework_set_scheduler", 00:16:48.667 "params": { 00:16:48.667 "name": "static" 00:16:48.667 } 00:16:48.667 } 00:16:48.667 ] 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "subsystem": "nvmf", 00:16:48.667 "config": [ 00:16:48.667 { 00:16:48.667 "method": "nvmf_set_config", 00:16:48.667 "params": { 00:16:48.667 "discovery_filter": "match_any", 00:16:48.667 "admin_cmd_passthru": { 00:16:48.667 "identify_ctrlr": false 00:16:48.667 }, 00:16:48.667 "dhchap_digests": [ 00:16:48.667 "sha256", 00:16:48.667 "sha384", 00:16:48.667 "sha512" 00:16:48.667 ], 00:16:48.667 "dhchap_dhgroups": [ 00:16:48.667 "null", 00:16:48.667 "ffdhe2048", 00:16:48.667 "ffdhe3072", 00:16:48.667 "ffdhe4096", 00:16:48.667 "ffdhe6144", 00:16:48.667 "ffdhe8192" 00:16:48.667 ] 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "nvmf_set_max_subsystems", 00:16:48.667 "params": { 00:16:48.667 "max_subsystems": 1024 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "nvmf_set_crdt", 00:16:48.667 "params": { 00:16:48.667 "crdt1": 0, 00:16:48.667 "crdt2": 0, 00:16:48.667 "crdt3": 0 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "nvmf_create_transport", 00:16:48.667 "params": { 00:16:48.667 "trtype": "TCP", 00:16:48.667 "max_queue_depth": 128, 00:16:48.667 "max_io_qpairs_per_ctrlr": 127, 00:16:48.667 "in_capsule_data_size": 4096, 00:16:48.667 "max_io_size": 131072, 00:16:48.667 "io_unit_size": 131072, 00:16:48.667 "max_aq_depth": 128, 00:16:48.667 "num_shared_buffers": 511, 00:16:48.667 "buf_cache_size": 4294967295, 00:16:48.667 "dif_insert_or_strip": false, 00:16:48.667 "zcopy": false, 
00:16:48.667 "c2h_success": false, 00:16:48.667 "sock_priority": 0, 00:16:48.667 "abort_timeout_sec": 1, 00:16:48.667 "ack_timeout": 0, 00:16:48.667 "data_wr_pool_size": 0 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "nvmf_create_subsystem", 00:16:48.667 "params": { 00:16:48.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.667 "allow_any_host": false, 00:16:48.667 "serial_number": "SPDK00000000000001", 00:16:48.667 "model_number": "SPDK bdev Controller", 00:16:48.667 "max_namespaces": 10, 00:16:48.667 "min_cntlid": 1, 00:16:48.667 "max_cntlid": 65519, 00:16:48.667 "ana_reporting": false 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "nvmf_subsystem_add_host", 00:16:48.667 "params": { 00:16:48.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.667 "host": "nqn.2016-06.io.spdk:host1", 00:16:48.667 "psk": "key0" 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "nvmf_subsystem_add_ns", 00:16:48.667 "params": { 00:16:48.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.667 "namespace": { 00:16:48.667 "nsid": 1, 00:16:48.667 "bdev_name": "malloc0", 00:16:48.667 "nguid": "5F8BADD37B91422090D8CD7CA60BCC17", 00:16:48.667 "uuid": "5f8badd3-7b91-4220-90d8-cd7ca60bcc17", 00:16:48.667 "no_auto_visible": false 00:16:48.667 } 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "method": "nvmf_subsystem_add_listener", 00:16:48.667 "params": { 00:16:48.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.667 "listen_address": { 00:16:48.667 "trtype": "TCP", 00:16:48.667 "adrfam": "IPv4", 00:16:48.667 "traddr": "10.0.0.3", 00:16:48.667 "trsvcid": "4420" 00:16:48.667 }, 00:16:48.667 "secure_channel": true 00:16:48.667 } 00:16:48.667 } 00:16:48.667 ] 00:16:48.667 } 00:16:48.667 ] 00:16:48.667 }' 00:16:48.667 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:48.926 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:48.926 "subsystems": [ 00:16:48.926 { 00:16:48.926 "subsystem": "keyring", 00:16:48.926 "config": [ 00:16:48.926 { 00:16:48.926 "method": "keyring_file_add_key", 00:16:48.926 "params": { 00:16:48.926 "name": "key0", 00:16:48.926 "path": "/tmp/tmp.6sFWvfQYsR" 00:16:48.926 } 00:16:48.926 } 00:16:48.926 ] 00:16:48.926 }, 00:16:48.926 { 00:16:48.926 "subsystem": "iobuf", 00:16:48.926 "config": [ 00:16:48.926 { 00:16:48.926 "method": "iobuf_set_options", 00:16:48.926 "params": { 00:16:48.926 "small_pool_count": 8192, 00:16:48.926 "large_pool_count": 1024, 00:16:48.926 "small_bufsize": 8192, 00:16:48.926 "large_bufsize": 135168, 00:16:48.926 "enable_numa": false 00:16:48.926 } 00:16:48.926 } 00:16:48.926 ] 00:16:48.926 }, 00:16:48.926 { 00:16:48.926 "subsystem": "sock", 00:16:48.926 "config": [ 00:16:48.926 { 00:16:48.926 "method": "sock_set_default_impl", 00:16:48.926 "params": { 00:16:48.926 "impl_name": "uring" 00:16:48.926 } 00:16:48.926 }, 00:16:48.926 { 00:16:48.926 "method": "sock_impl_set_options", 00:16:48.926 "params": { 00:16:48.926 "impl_name": "ssl", 00:16:48.926 "recv_buf_size": 4096, 00:16:48.926 "send_buf_size": 4096, 00:16:48.926 "enable_recv_pipe": true, 00:16:48.926 "enable_quickack": false, 00:16:48.926 "enable_placement_id": 0, 00:16:48.926 "enable_zerocopy_send_server": true, 00:16:48.926 "enable_zerocopy_send_client": false, 00:16:48.926 "zerocopy_threshold": 0, 00:16:48.926 "tls_version": 0, 00:16:48.926 "enable_ktls": false 00:16:48.926 } 00:16:48.926 }, 
00:16:48.926 { 00:16:48.926 "method": "sock_impl_set_options", 00:16:48.926 "params": { 00:16:48.926 "impl_name": "posix", 00:16:48.926 "recv_buf_size": 2097152, 00:16:48.926 "send_buf_size": 2097152, 00:16:48.926 "enable_recv_pipe": true, 00:16:48.926 "enable_quickack": false, 00:16:48.926 "enable_placement_id": 0, 00:16:48.926 "enable_zerocopy_send_server": true, 00:16:48.926 "enable_zerocopy_send_client": false, 00:16:48.926 "zerocopy_threshold": 0, 00:16:48.926 "tls_version": 0, 00:16:48.926 "enable_ktls": false 00:16:48.926 } 00:16:48.926 }, 00:16:48.926 { 00:16:48.926 "method": "sock_impl_set_options", 00:16:48.926 "params": { 00:16:48.926 "impl_name": "uring", 00:16:48.926 "recv_buf_size": 2097152, 00:16:48.926 "send_buf_size": 2097152, 00:16:48.926 "enable_recv_pipe": true, 00:16:48.926 "enable_quickack": false, 00:16:48.926 "enable_placement_id": 0, 00:16:48.927 "enable_zerocopy_send_server": false, 00:16:48.927 "enable_zerocopy_send_client": false, 00:16:48.927 "zerocopy_threshold": 0, 00:16:48.927 "tls_version": 0, 00:16:48.927 "enable_ktls": false 00:16:48.927 } 00:16:48.927 } 00:16:48.927 ] 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "subsystem": "vmd", 00:16:48.927 "config": [] 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "subsystem": "accel", 00:16:48.927 "config": [ 00:16:48.927 { 00:16:48.927 "method": "accel_set_options", 00:16:48.927 "params": { 00:16:48.927 "small_cache_size": 128, 00:16:48.927 "large_cache_size": 16, 00:16:48.927 "task_count": 2048, 00:16:48.927 "sequence_count": 2048, 00:16:48.927 "buf_count": 2048 00:16:48.927 } 00:16:48.927 } 00:16:48.927 ] 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "subsystem": "bdev", 00:16:48.927 "config": [ 00:16:48.927 { 00:16:48.927 "method": "bdev_set_options", 00:16:48.927 "params": { 00:16:48.927 "bdev_io_pool_size": 65535, 00:16:48.927 "bdev_io_cache_size": 256, 00:16:48.927 "bdev_auto_examine": true, 00:16:48.927 "iobuf_small_cache_size": 128, 00:16:48.927 "iobuf_large_cache_size": 16 00:16:48.927 } 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "method": "bdev_raid_set_options", 00:16:48.927 "params": { 00:16:48.927 "process_window_size_kb": 1024, 00:16:48.927 "process_max_bandwidth_mb_sec": 0 00:16:48.927 } 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "method": "bdev_iscsi_set_options", 00:16:48.927 "params": { 00:16:48.927 "timeout_sec": 30 00:16:48.927 } 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "method": "bdev_nvme_set_options", 00:16:48.927 "params": { 00:16:48.927 "action_on_timeout": "none", 00:16:48.927 "timeout_us": 0, 00:16:48.927 "timeout_admin_us": 0, 00:16:48.927 "keep_alive_timeout_ms": 10000, 00:16:48.927 "arbitration_burst": 0, 00:16:48.927 "low_priority_weight": 0, 00:16:48.927 "medium_priority_weight": 0, 00:16:48.927 "high_priority_weight": 0, 00:16:48.927 "nvme_adminq_poll_period_us": 10000, 00:16:48.927 "nvme_ioq_poll_period_us": 0, 00:16:48.927 "io_queue_requests": 512, 00:16:48.927 "delay_cmd_submit": true, 00:16:48.927 "transport_retry_count": 4, 00:16:48.927 "bdev_retry_count": 3, 00:16:48.927 "transport_ack_timeout": 0, 00:16:48.927 "ctrlr_loss_timeout_sec": 0, 00:16:48.927 "reconnect_delay_sec": 0, 00:16:48.927 "fast_io_fail_timeout_sec": 0, 00:16:48.927 "disable_auto_failback": false, 00:16:48.927 "generate_uuids": false, 00:16:48.927 "transport_tos": 0, 00:16:48.927 "nvme_error_stat": false, 00:16:48.927 "rdma_srq_size": 0, 00:16:48.927 "io_path_stat": false, 00:16:48.927 "allow_accel_sequence": false, 00:16:48.927 "rdma_max_cq_size": 0, 00:16:48.927 "rdma_cm_event_timeout_ms": 0, 00:16:48.927 
"dhchap_digests": [ 00:16:48.927 "sha256", 00:16:48.927 "sha384", 00:16:48.927 "sha512" 00:16:48.927 ], 00:16:48.927 "dhchap_dhgroups": [ 00:16:48.927 "null", 00:16:48.927 "ffdhe2048", 00:16:48.927 "ffdhe3072", 00:16:48.927 "ffdhe4096", 00:16:48.927 "ffdhe6144", 00:16:48.927 "ffdhe8192" 00:16:48.927 ] 00:16:48.927 } 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "method": "bdev_nvme_attach_controller", 00:16:48.927 "params": { 00:16:48.927 "name": "TLSTEST", 00:16:48.927 "trtype": "TCP", 00:16:48.927 "adrfam": "IPv4", 00:16:48.927 "traddr": "10.0.0.3", 00:16:48.927 "trsvcid": "4420", 00:16:48.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.927 "prchk_reftag": false, 00:16:48.927 "prchk_guard": false, 00:16:48.927 "ctrlr_loss_timeout_sec": 0, 00:16:48.927 "reconnect_delay_sec": 0, 00:16:48.927 "fast_io_fail_timeout_sec": 0, 00:16:48.927 "psk": "key0", 00:16:48.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.927 "hdgst": false, 00:16:48.927 "ddgst": false, 00:16:48.927 "multipath": "multipath" 00:16:48.927 } 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "method": "bdev_nvme_set_hotplug", 00:16:48.927 "params": { 00:16:48.927 "period_us": 100000, 00:16:48.927 "enable": false 00:16:48.927 } 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "method": "bdev_wait_for_examine" 00:16:48.927 } 00:16:48.927 ] 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "subsystem": "nbd", 00:16:48.927 "config": [] 00:16:48.927 } 00:16:48.927 ] 00:16:48.927 }' 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71990 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71990 ']' 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71990 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71990 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:48.927 killing process with pid 71990 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71990' 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71990 00:16:48.927 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.927 00:16:48.927 Latency(us) 00:16:48.927 [2024-11-04T17:17:49.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.927 [2024-11-04T17:17:49.731Z] =================================================================================================================== 00:16:48.927 [2024-11-04T17:17:49.731Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.927 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71990 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71942 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71942 ']' 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 71942 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71942 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:49.186 killing process with pid 71942 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71942' 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71942 00:16:49.186 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71942 00:16:49.444 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:49.445 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.445 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:49.445 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:49.445 "subsystems": [ 00:16:49.445 { 00:16:49.445 "subsystem": "keyring", 00:16:49.445 "config": [ 00:16:49.445 { 00:16:49.445 "method": "keyring_file_add_key", 00:16:49.445 "params": { 00:16:49.445 "name": "key0", 00:16:49.445 "path": "/tmp/tmp.6sFWvfQYsR" 00:16:49.445 } 00:16:49.445 } 00:16:49.445 ] 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "subsystem": "iobuf", 00:16:49.445 "config": [ 00:16:49.445 { 00:16:49.445 "method": "iobuf_set_options", 00:16:49.445 "params": { 00:16:49.445 "small_pool_count": 8192, 00:16:49.445 "large_pool_count": 1024, 00:16:49.445 "small_bufsize": 8192, 00:16:49.445 "large_bufsize": 135168, 00:16:49.445 "enable_numa": false 00:16:49.445 } 00:16:49.445 } 00:16:49.445 ] 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "subsystem": "sock", 00:16:49.445 "config": [ 00:16:49.445 { 00:16:49.445 "method": "sock_set_default_impl", 00:16:49.445 "params": { 00:16:49.445 "impl_name": "uring" 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "sock_impl_set_options", 00:16:49.445 "params": { 00:16:49.445 "impl_name": "ssl", 00:16:49.445 "recv_buf_size": 4096, 00:16:49.445 "send_buf_size": 4096, 00:16:49.445 "enable_recv_pipe": true, 00:16:49.445 "enable_quickack": false, 00:16:49.445 "enable_placement_id": 0, 00:16:49.445 "enable_zerocopy_send_server": true, 00:16:49.445 "enable_zerocopy_send_client": false, 00:16:49.445 "zerocopy_threshold": 0, 00:16:49.445 "tls_version": 0, 00:16:49.445 "enable_ktls": false 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "sock_impl_set_options", 00:16:49.445 "params": { 00:16:49.445 "impl_name": "posix", 00:16:49.445 "recv_buf_size": 2097152, 00:16:49.445 "send_buf_size": 2097152, 00:16:49.445 "enable_recv_pipe": true, 00:16:49.445 "enable_quickack": false, 00:16:49.445 "enable_placement_id": 0, 00:16:49.445 "enable_zerocopy_send_server": true, 00:16:49.445 "enable_zerocopy_send_client": false, 00:16:49.445 "zerocopy_threshold": 0, 00:16:49.445 "tls_version": 0, 00:16:49.445 "enable_ktls": false 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "sock_impl_set_options", 
00:16:49.445 "params": { 00:16:49.445 "impl_name": "uring", 00:16:49.445 "recv_buf_size": 2097152, 00:16:49.445 "send_buf_size": 2097152, 00:16:49.445 "enable_recv_pipe": true, 00:16:49.445 "enable_quickack": false, 00:16:49.445 "enable_placement_id": 0, 00:16:49.445 "enable_zerocopy_send_server": false, 00:16:49.445 "enable_zerocopy_send_client": false, 00:16:49.445 "zerocopy_threshold": 0, 00:16:49.445 "tls_version": 0, 00:16:49.445 "enable_ktls": false 00:16:49.445 } 00:16:49.445 } 00:16:49.445 ] 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "subsystem": "vmd", 00:16:49.445 "config": [] 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "subsystem": "accel", 00:16:49.445 "config": [ 00:16:49.445 { 00:16:49.445 "method": "accel_set_options", 00:16:49.445 "params": { 00:16:49.445 "small_cache_size": 128, 00:16:49.445 "large_cache_size": 16, 00:16:49.445 "task_count": 2048, 00:16:49.445 "sequence_count": 2048, 00:16:49.445 "buf_count": 2048 00:16:49.445 } 00:16:49.445 } 00:16:49.445 ] 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "subsystem": "bdev", 00:16:49.445 "config": [ 00:16:49.445 { 00:16:49.445 "method": "bdev_set_options", 00:16:49.445 "params": { 00:16:49.445 "bdev_io_pool_size": 65535, 00:16:49.445 "bdev_io_cache_size": 256, 00:16:49.445 "bdev_auto_examine": true, 00:16:49.445 "iobuf_small_cache_size": 128, 00:16:49.445 "iobuf_large_cache_size": 16 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "bdev_raid_set_options", 00:16:49.445 "params": { 00:16:49.445 "process_window_size_kb": 1024, 00:16:49.445 "process_max_bandwidth_mb_sec": 0 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "bdev_iscsi_set_options", 00:16:49.445 "params": { 00:16:49.445 "timeout_sec": 30 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "bdev_nvme_set_options", 00:16:49.445 "params": { 00:16:49.445 "action_on_timeout": "none", 00:16:49.445 "timeout_us": 0, 00:16:49.445 "timeout_admin_us": 0, 00:16:49.445 "keep_alive_timeout_ms": 10000, 00:16:49.445 "arbitration_burst": 0, 00:16:49.445 "low_priority_weight": 0, 00:16:49.445 "medium_priority_weight": 0, 00:16:49.445 "high_priority_weight": 0, 00:16:49.445 "nvme_adminq_poll_period_us": 10000, 00:16:49.445 "nvme_ioq_poll_period_us": 0, 00:16:49.445 "io_queue_requests": 0, 00:16:49.445 "delay_cmd_submit": true, 00:16:49.445 "transport_retry_count": 4, 00:16:49.445 "bdev_retry_count": 3, 00:16:49.445 "transport_ack_timeout": 0, 00:16:49.445 "ctrlr_loss_timeout_sec": 0, 00:16:49.445 "reconnect_delay_sec": 0, 00:16:49.445 "fast_io_fail_timeout_sec": 0, 00:16:49.445 "disable_auto_failback": false, 00:16:49.445 "generate_uuids": false, 00:16:49.445 "transport_tos": 0, 00:16:49.445 "nvme_error_stat": false, 00:16:49.445 "rdma_srq_size": 0, 00:16:49.445 "io_path_stat": false, 00:16:49.445 "allow_accel_sequence": false, 00:16:49.445 "rdma_max_cq_size": 0, 00:16:49.445 "rdma_cm_event_timeout_ms": 0, 00:16:49.445 "dhchap_digests": [ 00:16:49.445 "sha256", 00:16:49.445 "sha384", 00:16:49.445 "sha512" 00:16:49.445 ], 00:16:49.445 "dhchap_dhgroups": [ 00:16:49.445 "null", 00:16:49.445 "ffdhe2048", 00:16:49.445 "ffdhe3072", 00:16:49.445 "ffdhe4096", 00:16:49.445 "ffdhe6144", 00:16:49.445 "ffdhe8192" 00:16:49.445 ] 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "bdev_nvme_set_hotplug", 00:16:49.445 "params": { 00:16:49.445 "period_us": 100000, 00:16:49.445 "enable": false 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "bdev_malloc_create", 00:16:49.445 "params": { 00:16:49.445 
"name": "malloc0", 00:16:49.445 "num_blocks": 8192, 00:16:49.445 "block_size": 4096, 00:16:49.445 "physical_block_size": 4096, 00:16:49.445 "uuid": "5f8badd3-7b91-4220-90d8-cd7ca60bcc17", 00:16:49.445 "optimal_io_boundary": 0, 00:16:49.445 "md_size": 0, 00:16:49.445 "dif_type": 0, 00:16:49.445 "dif_is_head_of_md": false, 00:16:49.445 "dif_pi_format": 0 00:16:49.445 } 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "method": "bdev_wait_for_examine" 00:16:49.445 } 00:16:49.445 ] 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "subsystem": "nbd", 00:16:49.445 "config": [] 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "subsystem": "scheduler", 00:16:49.445 "config": [ 00:16:49.445 { 00:16:49.445 "method": "framework_set_scheduler", 00:16:49.445 "params": { 00:16:49.445 "name": "static" 00:16:49.445 } 00:16:49.445 } 00:16:49.445 ] 00:16:49.445 }, 00:16:49.445 { 00:16:49.445 "subsystem": "nvmf", 00:16:49.445 "config": [ 00:16:49.445 { 00:16:49.445 "method": "nvmf_set_config", 00:16:49.445 "params": { 00:16:49.445 "discovery_filter": "match_any", 00:16:49.445 "admin_cmd_passthru": { 00:16:49.445 "identify_ctrlr": false 00:16:49.445 }, 00:16:49.445 "dhchap_digests": [ 00:16:49.445 "sha256", 00:16:49.445 "sha384", 00:16:49.445 "sha512" 00:16:49.445 ], 00:16:49.445 "dhchap_dhgroups": [ 00:16:49.445 "null", 00:16:49.445 "ffdhe2048", 00:16:49.445 "ffdhe3072", 00:16:49.445 "ffdhe4096", 00:16:49.445 "ffdhe6144", 00:16:49.445 "ffdhe8192" 00:16:49.445 ] 00:16:49.445 } 00:16:49.446 }, 00:16:49.446 { 00:16:49.446 "method": "nvmf_set_max_subsystems", 00:16:49.446 "params": { 00:16:49.446 "max_subsystems": 1024 00:16:49.446 } 00:16:49.446 }, 00:16:49.446 { 00:16:49.446 "method": "nvmf_set_crdt", 00:16:49.446 "params": { 00:16:49.446 "crdt1": 0, 00:16:49.446 "crdt2": 0, 00:16:49.446 "crdt3": 0 00:16:49.446 } 00:16:49.446 }, 00:16:49.446 { 00:16:49.446 "method": "nvmf_create_transport", 00:16:49.446 "params": { 00:16:49.446 "trtype": "TCP", 00:16:49.446 "max_queue_depth": 128, 00:16:49.446 "max_io_qpairs_per_ctrlr": 127, 00:16:49.446 "in_capsule_data_size": 4096, 00:16:49.446 "max_io_size": 131072, 00:16:49.446 "io_unit_size": 131072, 00:16:49.446 "max_aq_depth": 128, 00:16:49.446 "num_shared_buffers": 511, 00:16:49.446 "buf_cache_size": 4294967295, 00:16:49.446 "dif_insert_or_strip": false, 00:16:49.446 "zcopy": false, 00:16:49.446 "c2h_success": false, 00:16:49.446 "sock_priority": 0, 00:16:49.446 "abort_timeout_sec": 1, 00:16:49.446 "ack_timeout": 0, 00:16:49.446 "data_wr_pool_size": 0 00:16:49.446 } 00:16:49.446 }, 00:16:49.446 { 00:16:49.446 "method": "nvmf_create_subsystem", 00:16:49.446 "params": { 00:16:49.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.446 "allow_any_host": false, 00:16:49.446 "serial_number": "SPDK00000000000001", 00:16:49.446 "model_number": "SPDK bdev Controller", 00:16:49.446 "max_namespaces": 10, 00:16:49.446 "min_cntlid": 1, 00:16:49.446 "max_cntlid": 65519, 00:16:49.446 "ana_reporting": false 00:16:49.446 } 00:16:49.446 }, 00:16:49.446 { 00:16:49.446 "method": "nvmf_subsystem_add_host", 00:16:49.446 "params": { 00:16:49.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.446 "host": "nqn.2016-06.io.spdk:host1", 00:16:49.446 "psk": "key0" 00:16:49.446 } 00:16:49.446 }, 00:16:49.446 { 00:16:49.446 "method": "nvmf_subsystem_add_ns", 00:16:49.446 "params": { 00:16:49.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.446 "namespace": { 00:16:49.446 "nsid": 1, 00:16:49.446 "bdev_name": "malloc0", 00:16:49.446 "nguid": "5F8BADD37B91422090D8CD7CA60BCC17", 00:16:49.446 "uuid": 
"5f8badd3-7b91-4220-90d8-cd7ca60bcc17", 00:16:49.446 "no_auto_visible": false 00:16:49.446 } 00:16:49.446 } 00:16:49.446 }, 00:16:49.446 { 00:16:49.446 "method": "nvmf_subsystem_add_listener", 00:16:49.446 "params": { 00:16:49.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.446 "listen_address": { 00:16:49.446 "trtype": "TCP", 00:16:49.446 "adrfam": "IPv4", 00:16:49.446 "traddr": "10.0.0.3", 00:16:49.446 "trsvcid": "4420" 00:16:49.446 }, 00:16:49.446 "secure_channel": true 00:16:49.446 } 00:16:49.446 } 00:16:49.446 ] 00:16:49.446 } 00:16:49.446 ] 00:16:49.446 }' 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72032 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72032 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72032 ']' 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:49.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:49.446 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.446 [2024-11-04 17:17:50.224151] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:16:49.446 [2024-11-04 17:17:50.224250] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.704 [2024-11-04 17:17:50.365962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.704 [2024-11-04 17:17:50.418351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.704 [2024-11-04 17:17:50.418415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.704 [2024-11-04 17:17:50.418425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.704 [2024-11-04 17:17:50.418433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.704 [2024-11-04 17:17:50.418439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:49.704 [2024-11-04 17:17:50.418848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.963 [2024-11-04 17:17:50.588969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.963 [2024-11-04 17:17:50.668084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.963 [2024-11-04 17:17:50.700035] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.963 [2024-11-04 17:17:50.700264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72064 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72064 /var/tmp/bdevperf.sock 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72064 ']' 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.531 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:50.531 "subsystems": [ 00:16:50.531 { 00:16:50.531 "subsystem": "keyring", 00:16:50.531 "config": [ 00:16:50.531 { 00:16:50.531 "method": "keyring_file_add_key", 00:16:50.531 "params": { 00:16:50.531 "name": "key0", 00:16:50.531 "path": "/tmp/tmp.6sFWvfQYsR" 00:16:50.531 } 00:16:50.531 } 00:16:50.531 ] 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "subsystem": "iobuf", 00:16:50.531 "config": [ 00:16:50.531 { 00:16:50.531 "method": "iobuf_set_options", 00:16:50.531 "params": { 00:16:50.531 "small_pool_count": 8192, 00:16:50.531 "large_pool_count": 1024, 00:16:50.531 "small_bufsize": 8192, 00:16:50.531 "large_bufsize": 135168, 00:16:50.531 "enable_numa": false 00:16:50.531 } 00:16:50.531 } 00:16:50.531 ] 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "subsystem": "sock", 00:16:50.531 "config": [ 00:16:50.531 { 00:16:50.531 "method": "sock_set_default_impl", 00:16:50.531 "params": { 00:16:50.531 "impl_name": "uring" 00:16:50.531 } 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "method": "sock_impl_set_options", 00:16:50.531 "params": { 00:16:50.531 "impl_name": "ssl", 00:16:50.531 "recv_buf_size": 4096, 00:16:50.531 "send_buf_size": 4096, 00:16:50.531 "enable_recv_pipe": true, 00:16:50.531 "enable_quickack": false, 00:16:50.531 "enable_placement_id": 0, 00:16:50.531 "enable_zerocopy_send_server": true, 00:16:50.531 "enable_zerocopy_send_client": false, 00:16:50.531 "zerocopy_threshold": 0, 00:16:50.531 "tls_version": 0, 00:16:50.531 "enable_ktls": false 00:16:50.531 } 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "method": "sock_impl_set_options", 00:16:50.531 "params": { 00:16:50.531 "impl_name": "posix", 00:16:50.531 "recv_buf_size": 2097152, 00:16:50.531 "send_buf_size": 2097152, 00:16:50.531 "enable_recv_pipe": true, 00:16:50.531 "enable_quickack": false, 00:16:50.531 "enable_placement_id": 0, 00:16:50.531 "enable_zerocopy_send_server": true, 00:16:50.531 "enable_zerocopy_send_client": false, 00:16:50.531 "zerocopy_threshold": 0, 00:16:50.531 "tls_version": 0, 00:16:50.531 "enable_ktls": false 00:16:50.531 } 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "method": "sock_impl_set_options", 00:16:50.531 "params": { 00:16:50.531 "impl_name": "uring", 00:16:50.531 "recv_buf_size": 2097152, 00:16:50.531 "send_buf_size": 2097152, 00:16:50.531 "enable_recv_pipe": true, 00:16:50.531 "enable_quickack": false, 00:16:50.531 "enable_placement_id": 0, 00:16:50.531 "enable_zerocopy_send_server": false, 00:16:50.531 "enable_zerocopy_send_client": false, 00:16:50.531 "zerocopy_threshold": 0, 00:16:50.531 "tls_version": 0, 00:16:50.531 "enable_ktls": false 00:16:50.531 } 00:16:50.531 } 00:16:50.531 ] 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "subsystem": "vmd", 00:16:50.531 "config": [] 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "subsystem": "accel", 00:16:50.531 "config": [ 00:16:50.531 { 00:16:50.531 "method": "accel_set_options", 00:16:50.531 "params": { 00:16:50.531 "small_cache_size": 128, 00:16:50.531 "large_cache_size": 16, 00:16:50.531 "task_count": 2048, 00:16:50.531 "sequence_count": 
2048, 00:16:50.531 "buf_count": 2048 00:16:50.531 } 00:16:50.531 } 00:16:50.531 ] 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "subsystem": "bdev", 00:16:50.531 "config": [ 00:16:50.531 { 00:16:50.531 "method": "bdev_set_options", 00:16:50.531 "params": { 00:16:50.531 "bdev_io_pool_size": 65535, 00:16:50.531 "bdev_io_cache_size": 256, 00:16:50.531 "bdev_auto_examine": true, 00:16:50.531 "iobuf_small_cache_size": 128, 00:16:50.531 "iobuf_large_cache_size": 16 00:16:50.531 } 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "method": "bdev_raid_set_options", 00:16:50.531 "params": { 00:16:50.531 "process_window_size_kb": 1024, 00:16:50.531 "process_max_bandwidth_mb_sec": 0 00:16:50.531 } 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "method": "bdev_iscsi_set_options", 00:16:50.531 "params": { 00:16:50.531 "timeout_sec": 30 00:16:50.531 } 00:16:50.531 }, 00:16:50.531 { 00:16:50.531 "method": "bdev_nvme_set_options", 00:16:50.531 "params": { 00:16:50.531 "action_on_timeout": "none", 00:16:50.531 "timeout_us": 0, 00:16:50.531 "timeout_admin_us": 0, 00:16:50.531 "keep_alive_timeout_ms": 10000, 00:16:50.532 "arbitration_burst": 0, 00:16:50.532 "low_priority_weight": 0, 00:16:50.532 "medium_priority_weight": 0, 00:16:50.532 "high_priority_weight": 0, 00:16:50.532 "nvme_adminq_poll_period_us": 10000, 00:16:50.532 "nvme_ioq_poll_period_us": 0, 00:16:50.532 "io_queue_requests": 512, 00:16:50.532 "delay_cmd_submit": true, 00:16:50.532 "transport_retry_count": 4, 00:16:50.532 "bdev_retry_count": 3, 00:16:50.532 "transport_ack_timeout": 0, 00:16:50.532 "ctrlr_loss_timeout_sec": 0, 00:16:50.532 "reconnect_delay_sec": 0, 00:16:50.532 "fast_io_fail_timeout_sec": 0, 00:16:50.532 "disable_auto_failback": false, 00:16:50.532 "generate_uuids": false, 00:16:50.532 "transport_tos": 0, 00:16:50.532 "nvme_error_stat": false, 00:16:50.532 "rdma_srq_size": 0, 00:16:50.532 "io_path_stat": false, 00:16:50.532 "allow_accel_sequence": false, 00:16:50.532 "rdma_max_cq_size": 0, 00:16:50.532 "rdma_cm_event_timeout_ms": 0, 00:16:50.532 "dhchap_digests": [ 00:16:50.532 "sha256", 00:16:50.532 "sha384", 00:16:50.532 "sha512" 00:16:50.532 ], 00:16:50.532 "dhchap_dhgroups": [ 00:16:50.532 "null", 00:16:50.532 "ffdhe2048", 00:16:50.532 "ffdhe3072", 00:16:50.532 "ffdhe4096", 00:16:50.532 "ffdhe6144", 00:16:50.532 "ffdhe8192" 00:16:50.532 ] 00:16:50.532 } 00:16:50.532 }, 00:16:50.532 { 00:16:50.532 "method": "bdev_nvme_attach_controller", 00:16:50.532 "params": { 00:16:50.532 "name": "TLSTEST", 00:16:50.532 "trtype": "TCP", 00:16:50.532 "adrfam": "IPv4", 00:16:50.532 "traddr": "10.0.0.3", 00:16:50.532 "trsvcid": "4420", 00:16:50.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.532 "prchk_reftag": false, 00:16:50.532 "prchk_guard": false, 00:16:50.532 "ctrlr_loss_timeout_sec": 0, 00:16:50.532 "reconnect_delay_sec": 0, 00:16:50.532 "fast_io_fail_timeout_sec": 0, 00:16:50.532 "psk": "key0", 00:16:50.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.532 "hdgst": false, 00:16:50.532 "ddgst": false, 00:16:50.532 "multipath": "multipath" 00:16:50.532 } 00:16:50.532 }, 00:16:50.532 { 00:16:50.532 "method": "bdev_nvme_set_hotplug", 00:16:50.532 "params": { 00:16:50.532 "period_us": 100000, 00:16:50.532 "enable": false 00:16:50.532 } 00:16:50.532 }, 00:16:50.532 { 00:16:50.532 "method": "bdev_wait_for_examine" 00:16:50.532 } 00:16:50.532 ] 00:16:50.532 }, 00:16:50.532 { 00:16:50.532 "subsystem": "nbd", 00:16:50.532 "config": [] 00:16:50.532 } 00:16:50.532 ] 00:16:50.532 }' 00:16:50.791 [2024-11-04 17:17:51.350048] Starting SPDK v25.01-pre git 
sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:16:50.791 [2024-11-04 17:17:51.350155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72064 ] 00:16:50.791 [2024-11-04 17:17:51.495473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.791 [2024-11-04 17:17:51.548030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.050 [2024-11-04 17:17:51.684737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:51.050 [2024-11-04 17:17:51.734064] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.617 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:51.617 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:16:51.617 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:51.875 Running I/O for 10 seconds... 00:16:53.747 4362.00 IOPS, 17.04 MiB/s [2024-11-04T17:17:56.054Z] 4374.00 IOPS, 17.09 MiB/s [2024-11-04T17:17:56.619Z] 4383.00 IOPS, 17.12 MiB/s [2024-11-04T17:17:57.554Z] 4385.25 IOPS, 17.13 MiB/s [2024-11-04T17:17:58.930Z] 4413.80 IOPS, 17.24 MiB/s [2024-11-04T17:17:59.864Z] 4410.67 IOPS, 17.23 MiB/s [2024-11-04T17:18:00.800Z] 4361.71 IOPS, 17.04 MiB/s [2024-11-04T17:18:01.736Z] 4283.38 IOPS, 16.73 MiB/s [2024-11-04T17:18:02.672Z] 4279.00 IOPS, 16.71 MiB/s [2024-11-04T17:18:02.672Z] 4277.70 IOPS, 16.71 MiB/s 00:17:01.868 Latency(us) 00:17:01.868 [2024-11-04T17:18:02.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.868 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:01.868 Verification LBA range: start 0x0 length 0x2000 00:17:01.868 TLSTESTn1 : 10.01 4283.79 16.73 0.00 0.00 29827.23 5659.93 36700.16 00:17:01.868 [2024-11-04T17:18:02.672Z] =================================================================================================================== 00:17:01.868 [2024-11-04T17:18:02.673Z] Total : 4283.79 16.73 0.00 0.00 29827.23 5659.93 36700.16 00:17:01.869 { 00:17:01.869 "results": [ 00:17:01.869 { 00:17:01.869 "job": "TLSTESTn1", 00:17:01.869 "core_mask": "0x4", 00:17:01.869 "workload": "verify", 00:17:01.869 "status": "finished", 00:17:01.869 "verify_range": { 00:17:01.869 "start": 0, 00:17:01.869 "length": 8192 00:17:01.869 }, 00:17:01.869 "queue_depth": 128, 00:17:01.869 "io_size": 4096, 00:17:01.869 "runtime": 10.014961, 00:17:01.869 "iops": 4283.791020254597, 00:17:01.869 "mibps": 16.73355867286952, 00:17:01.869 "io_failed": 0, 00:17:01.869 "io_timeout": 0, 00:17:01.869 "avg_latency_us": 29827.232360601964, 00:17:01.869 "min_latency_us": 5659.927272727273, 00:17:01.869 "max_latency_us": 36700.16 00:17:01.869 } 00:17:01.869 ], 00:17:01.869 "core_count": 1 00:17:01.869 } 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72064 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72064 ']' 00:17:01.869 17:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72064 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72064 00:17:01.869 killing process with pid 72064 00:17:01.869 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.869 00:17:01.869 Latency(us) 00:17:01.869 [2024-11-04T17:18:02.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.869 [2024-11-04T17:18:02.673Z] =================================================================================================================== 00:17:01.869 [2024-11-04T17:18:02.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72064' 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72064 00:17:01.869 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72064 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72032 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72032 ']' 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72032 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72032 00:17:02.128 killing process with pid 72032 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72032' 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72032 00:17:02.128 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72032 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72197 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:02.387 
17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72197 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72197 ']' 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:02.387 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.387 [2024-11-04 17:18:03.083480] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:02.387 [2024-11-04 17:18:03.083595] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.645 [2024-11-04 17:18:03.226819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.645 [2024-11-04 17:18:03.274625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.645 [2024-11-04 17:18:03.274682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.645 [2024-11-04 17:18:03.274710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.645 [2024-11-04 17:18:03.274718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.645 [2024-11-04 17:18:03.274725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:02.645 [2024-11-04 17:18:03.275148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.645 [2024-11-04 17:18:03.331127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:03.581 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:03.581 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:03.582 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.582 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.582 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.582 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.582 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.6sFWvfQYsR 00:17:03.582 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6sFWvfQYsR 00:17:03.582 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:03.582 [2024-11-04 17:18:04.334598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.582 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:04.148 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:04.148 [2024-11-04 17:18:04.882895] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:04.148 [2024-11-04 17:18:04.883164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:04.148 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:04.407 malloc0 00:17:04.407 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:04.666 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:17:04.924 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72260 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72260 /var/tmp/bdevperf.sock 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72260 ']' 00:17:05.183 
17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:05.183 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.183 [2024-11-04 17:18:05.946136] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:05.183 [2024-11-04 17:18:05.946516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72260 ] 00:17:05.443 [2024-11-04 17:18:06.084916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.443 [2024-11-04 17:18:06.143694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.443 [2024-11-04 17:18:06.201131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:05.701 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:05.701 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:05.701 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:17:05.960 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:05.960 [2024-11-04 17:18:06.727192] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.219 nvme0n1 00:17:06.219 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:06.219 Running I/O for 1 seconds... 
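The initiator-side setup traced above reduces to two RPCs against the bdevperf socket: register the PSK file as a keyring key, then attach the controller over TCP with that key, after which perform_tests kicks off the workload bdevperf was launched with. A hedged bash sketch reusing the test-local values from this log (the key path, the 10.0.0.3:4420 listener and the cnode1/host1 NQNs are specific to this run, not defaults):

# Sketch of the bdevperf-side TLS attach sequence shown above (values copied from the xtrace).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$RPC keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Start the verify workload bdevperf was launched with (-q 128 -o 4k -w verify -t 1).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests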
00:17:07.155 4224.00 IOPS, 16.50 MiB/s 00:17:07.155 Latency(us) 00:17:07.155 [2024-11-04T17:18:07.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.155 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:07.155 Verification LBA range: start 0x0 length 0x2000 00:17:07.155 nvme0n1 : 1.02 4277.85 16.71 0.00 0.00 29595.32 7179.17 18469.24 00:17:07.155 [2024-11-04T17:18:07.959Z] =================================================================================================================== 00:17:07.155 [2024-11-04T17:18:07.959Z] Total : 4277.85 16.71 0.00 0.00 29595.32 7179.17 18469.24 00:17:07.155 { 00:17:07.155 "results": [ 00:17:07.155 { 00:17:07.155 "job": "nvme0n1", 00:17:07.155 "core_mask": "0x2", 00:17:07.155 "workload": "verify", 00:17:07.155 "status": "finished", 00:17:07.155 "verify_range": { 00:17:07.155 "start": 0, 00:17:07.155 "length": 8192 00:17:07.155 }, 00:17:07.155 "queue_depth": 128, 00:17:07.155 "io_size": 4096, 00:17:07.155 "runtime": 1.017334, 00:17:07.155 "iops": 4277.8477864693405, 00:17:07.155 "mibps": 16.71034291589586, 00:17:07.155 "io_failed": 0, 00:17:07.155 "io_timeout": 0, 00:17:07.155 "avg_latency_us": 29595.3214973262, 00:17:07.155 "min_latency_us": 7179.170909090909, 00:17:07.155 "max_latency_us": 18469.236363636363 00:17:07.155 } 00:17:07.155 ], 00:17:07.155 "core_count": 1 00:17:07.155 } 00:17:07.414 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72260 00:17:07.414 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72260 ']' 00:17:07.414 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72260 00:17:07.414 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:07.414 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:07.415 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72260 00:17:07.415 killing process with pid 72260 00:17:07.415 Received shutdown signal, test time was about 1.000000 seconds 00:17:07.415 00:17:07.415 Latency(us) 00:17:07.415 [2024-11-04T17:18:08.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.415 [2024-11-04T17:18:08.219Z] =================================================================================================================== 00:17:07.415 [2024-11-04T17:18:08.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.415 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:07.415 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:07.415 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72260' 00:17:07.415 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72260 00:17:07.415 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72260 00:17:07.415 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72197 00:17:07.415 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72197 ']' 00:17:07.415 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72197 00:17:07.415 17:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:07.415 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:07.415 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72197 00:17:07.674 killing process with pid 72197 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72197' 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72197 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72197 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72304 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72304 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72304 ']' 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:07.674 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:07.933 [2024-11-04 17:18:08.478093] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:07.933 [2024-11-04 17:18:08.478487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.933 [2024-11-04 17:18:08.619259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.933 [2024-11-04 17:18:08.677603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.933 [2024-11-04 17:18:08.677664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:07.933 [2024-11-04 17:18:08.677676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.933 [2024-11-04 17:18:08.677684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.933 [2024-11-04 17:18:08.677691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.933 [2024-11-04 17:18:08.678134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.933 [2024-11-04 17:18:08.734863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.212 [2024-11-04 17:18:08.853070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.212 malloc0 00:17:08.212 [2024-11-04 17:18:08.885705] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.212 [2024-11-04 17:18:08.885974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72323 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72323 /var/tmp/bdevperf.sock 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72323 ']' 00:17:08.212 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.213 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:08.213 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
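On the target side, the rpc_cmd block at tls.sh@243 above performs an equivalent setup to the setup_nvmf_tgt helper traced at tls.sh@50-59 earlier in this log: create the TCP transport, expose a malloc-backed namespace through cnode1, open a TLS-enabled listener, and authorize host1 against the PSK. A hedged sketch of those RPCs, using the commands and test-local values recorded in the earlier trace:

# Sketch of the target-side TLS setup (same commands the setup_nvmf_tgt xtrace records; test-local values).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0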
00:17:08.213 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:08.213 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.213 [2024-11-04 17:18:08.977192] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:08.213 [2024-11-04 17:18:08.977606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72323 ] 00:17:08.484 [2024-11-04 17:18:09.126530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.484 [2024-11-04 17:18:09.183853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.484 [2024-11-04 17:18:09.242614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.742 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:08.742 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:08.742 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6sFWvfQYsR 00:17:09.000 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:09.259 [2024-11-04 17:18:09.826046] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.259 nvme0n1 00:17:09.259 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.259 Running I/O for 1 seconds... 
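Once the one-second run below finishes, tls.sh@267 snapshots the live target configuration with save_config (the tgtcfg JSON that follows). A hedged way to take the same snapshot by hand, with json.tool added purely for readability and an arbitrary example output path that is not part of the test:

# Sketch: capture the running target's configuration the way tls.sh@267 does.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config | python3 -m json.tool > /tmp/tgt_config.json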
00:17:10.646 4153.00 IOPS, 16.22 MiB/s 00:17:10.646 Latency(us) 00:17:10.646 [2024-11-04T17:18:11.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.646 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:10.646 Verification LBA range: start 0x0 length 0x2000 00:17:10.646 nvme0n1 : 1.02 4186.08 16.35 0.00 0.00 30164.23 6434.44 22639.71 00:17:10.646 [2024-11-04T17:18:11.450Z] =================================================================================================================== 00:17:10.646 [2024-11-04T17:18:11.450Z] Total : 4186.08 16.35 0.00 0.00 30164.23 6434.44 22639.71 00:17:10.646 { 00:17:10.646 "results": [ 00:17:10.646 { 00:17:10.646 "job": "nvme0n1", 00:17:10.646 "core_mask": "0x2", 00:17:10.646 "workload": "verify", 00:17:10.646 "status": "finished", 00:17:10.646 "verify_range": { 00:17:10.646 "start": 0, 00:17:10.646 "length": 8192 00:17:10.646 }, 00:17:10.646 "queue_depth": 128, 00:17:10.646 "io_size": 4096, 00:17:10.646 "runtime": 1.022676, 00:17:10.646 "iops": 4186.076528636636, 00:17:10.646 "mibps": 16.35186143998686, 00:17:10.646 "io_failed": 0, 00:17:10.646 "io_timeout": 0, 00:17:10.646 "avg_latency_us": 30164.230963878446, 00:17:10.646 "min_latency_us": 6434.443636363636, 00:17:10.646 "max_latency_us": 22639.70909090909 00:17:10.646 } 00:17:10.646 ], 00:17:10.646 "core_count": 1 00:17:10.646 } 00:17:10.646 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:10.646 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.646 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.646 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.646 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:10.646 "subsystems": [ 00:17:10.646 { 00:17:10.646 "subsystem": "keyring", 00:17:10.646 "config": [ 00:17:10.646 { 00:17:10.646 "method": "keyring_file_add_key", 00:17:10.646 "params": { 00:17:10.646 "name": "key0", 00:17:10.646 "path": "/tmp/tmp.6sFWvfQYsR" 00:17:10.646 } 00:17:10.646 } 00:17:10.646 ] 00:17:10.646 }, 00:17:10.646 { 00:17:10.646 "subsystem": "iobuf", 00:17:10.646 "config": [ 00:17:10.646 { 00:17:10.646 "method": "iobuf_set_options", 00:17:10.646 "params": { 00:17:10.646 "small_pool_count": 8192, 00:17:10.646 "large_pool_count": 1024, 00:17:10.646 "small_bufsize": 8192, 00:17:10.646 "large_bufsize": 135168, 00:17:10.646 "enable_numa": false 00:17:10.646 } 00:17:10.646 } 00:17:10.646 ] 00:17:10.646 }, 00:17:10.646 { 00:17:10.646 "subsystem": "sock", 00:17:10.646 "config": [ 00:17:10.646 { 00:17:10.646 "method": "sock_set_default_impl", 00:17:10.646 "params": { 00:17:10.646 "impl_name": "uring" 00:17:10.646 } 00:17:10.646 }, 00:17:10.646 { 00:17:10.646 "method": "sock_impl_set_options", 00:17:10.646 "params": { 00:17:10.646 "impl_name": "ssl", 00:17:10.647 "recv_buf_size": 4096, 00:17:10.647 "send_buf_size": 4096, 00:17:10.647 "enable_recv_pipe": true, 00:17:10.647 "enable_quickack": false, 00:17:10.647 "enable_placement_id": 0, 00:17:10.647 "enable_zerocopy_send_server": true, 00:17:10.647 "enable_zerocopy_send_client": false, 00:17:10.647 "zerocopy_threshold": 0, 00:17:10.647 "tls_version": 0, 00:17:10.647 "enable_ktls": false 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "sock_impl_set_options", 00:17:10.647 "params": { 00:17:10.647 "impl_name": "posix", 
00:17:10.647 "recv_buf_size": 2097152, 00:17:10.647 "send_buf_size": 2097152, 00:17:10.647 "enable_recv_pipe": true, 00:17:10.647 "enable_quickack": false, 00:17:10.647 "enable_placement_id": 0, 00:17:10.647 "enable_zerocopy_send_server": true, 00:17:10.647 "enable_zerocopy_send_client": false, 00:17:10.647 "zerocopy_threshold": 0, 00:17:10.647 "tls_version": 0, 00:17:10.647 "enable_ktls": false 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "sock_impl_set_options", 00:17:10.647 "params": { 00:17:10.647 "impl_name": "uring", 00:17:10.647 "recv_buf_size": 2097152, 00:17:10.647 "send_buf_size": 2097152, 00:17:10.647 "enable_recv_pipe": true, 00:17:10.647 "enable_quickack": false, 00:17:10.647 "enable_placement_id": 0, 00:17:10.647 "enable_zerocopy_send_server": false, 00:17:10.647 "enable_zerocopy_send_client": false, 00:17:10.647 "zerocopy_threshold": 0, 00:17:10.647 "tls_version": 0, 00:17:10.647 "enable_ktls": false 00:17:10.647 } 00:17:10.647 } 00:17:10.647 ] 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "subsystem": "vmd", 00:17:10.647 "config": [] 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "subsystem": "accel", 00:17:10.647 "config": [ 00:17:10.647 { 00:17:10.647 "method": "accel_set_options", 00:17:10.647 "params": { 00:17:10.647 "small_cache_size": 128, 00:17:10.647 "large_cache_size": 16, 00:17:10.647 "task_count": 2048, 00:17:10.647 "sequence_count": 2048, 00:17:10.647 "buf_count": 2048 00:17:10.647 } 00:17:10.647 } 00:17:10.647 ] 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "subsystem": "bdev", 00:17:10.647 "config": [ 00:17:10.647 { 00:17:10.647 "method": "bdev_set_options", 00:17:10.647 "params": { 00:17:10.647 "bdev_io_pool_size": 65535, 00:17:10.647 "bdev_io_cache_size": 256, 00:17:10.647 "bdev_auto_examine": true, 00:17:10.647 "iobuf_small_cache_size": 128, 00:17:10.647 "iobuf_large_cache_size": 16 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "bdev_raid_set_options", 00:17:10.647 "params": { 00:17:10.647 "process_window_size_kb": 1024, 00:17:10.647 "process_max_bandwidth_mb_sec": 0 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "bdev_iscsi_set_options", 00:17:10.647 "params": { 00:17:10.647 "timeout_sec": 30 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "bdev_nvme_set_options", 00:17:10.647 "params": { 00:17:10.647 "action_on_timeout": "none", 00:17:10.647 "timeout_us": 0, 00:17:10.647 "timeout_admin_us": 0, 00:17:10.647 "keep_alive_timeout_ms": 10000, 00:17:10.647 "arbitration_burst": 0, 00:17:10.647 "low_priority_weight": 0, 00:17:10.647 "medium_priority_weight": 0, 00:17:10.647 "high_priority_weight": 0, 00:17:10.647 "nvme_adminq_poll_period_us": 10000, 00:17:10.647 "nvme_ioq_poll_period_us": 0, 00:17:10.647 "io_queue_requests": 0, 00:17:10.647 "delay_cmd_submit": true, 00:17:10.647 "transport_retry_count": 4, 00:17:10.647 "bdev_retry_count": 3, 00:17:10.647 "transport_ack_timeout": 0, 00:17:10.647 "ctrlr_loss_timeout_sec": 0, 00:17:10.647 "reconnect_delay_sec": 0, 00:17:10.647 "fast_io_fail_timeout_sec": 0, 00:17:10.647 "disable_auto_failback": false, 00:17:10.647 "generate_uuids": false, 00:17:10.647 "transport_tos": 0, 00:17:10.647 "nvme_error_stat": false, 00:17:10.647 "rdma_srq_size": 0, 00:17:10.647 "io_path_stat": false, 00:17:10.647 "allow_accel_sequence": false, 00:17:10.647 "rdma_max_cq_size": 0, 00:17:10.647 "rdma_cm_event_timeout_ms": 0, 00:17:10.647 "dhchap_digests": [ 00:17:10.647 "sha256", 00:17:10.647 "sha384", 00:17:10.647 "sha512" 00:17:10.647 ], 00:17:10.647 
"dhchap_dhgroups": [ 00:17:10.647 "null", 00:17:10.647 "ffdhe2048", 00:17:10.647 "ffdhe3072", 00:17:10.647 "ffdhe4096", 00:17:10.647 "ffdhe6144", 00:17:10.647 "ffdhe8192" 00:17:10.647 ] 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "bdev_nvme_set_hotplug", 00:17:10.647 "params": { 00:17:10.647 "period_us": 100000, 00:17:10.647 "enable": false 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "bdev_malloc_create", 00:17:10.647 "params": { 00:17:10.647 "name": "malloc0", 00:17:10.647 "num_blocks": 8192, 00:17:10.647 "block_size": 4096, 00:17:10.647 "physical_block_size": 4096, 00:17:10.647 "uuid": "93d7a76e-b1c7-4651-807c-bdba34d27b23", 00:17:10.647 "optimal_io_boundary": 0, 00:17:10.647 "md_size": 0, 00:17:10.647 "dif_type": 0, 00:17:10.647 "dif_is_head_of_md": false, 00:17:10.647 "dif_pi_format": 0 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "bdev_wait_for_examine" 00:17:10.647 } 00:17:10.647 ] 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "subsystem": "nbd", 00:17:10.647 "config": [] 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "subsystem": "scheduler", 00:17:10.647 "config": [ 00:17:10.647 { 00:17:10.647 "method": "framework_set_scheduler", 00:17:10.647 "params": { 00:17:10.647 "name": "static" 00:17:10.647 } 00:17:10.647 } 00:17:10.647 ] 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "subsystem": "nvmf", 00:17:10.647 "config": [ 00:17:10.647 { 00:17:10.647 "method": "nvmf_set_config", 00:17:10.647 "params": { 00:17:10.647 "discovery_filter": "match_any", 00:17:10.647 "admin_cmd_passthru": { 00:17:10.647 "identify_ctrlr": false 00:17:10.647 }, 00:17:10.647 "dhchap_digests": [ 00:17:10.647 "sha256", 00:17:10.647 "sha384", 00:17:10.647 "sha512" 00:17:10.647 ], 00:17:10.647 "dhchap_dhgroups": [ 00:17:10.647 "null", 00:17:10.647 "ffdhe2048", 00:17:10.647 "ffdhe3072", 00:17:10.647 "ffdhe4096", 00:17:10.647 "ffdhe6144", 00:17:10.647 "ffdhe8192" 00:17:10.647 ] 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "nvmf_set_max_subsystems", 00:17:10.647 "params": { 00:17:10.647 "max_subsystems": 1024 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "nvmf_set_crdt", 00:17:10.647 "params": { 00:17:10.647 "crdt1": 0, 00:17:10.647 "crdt2": 0, 00:17:10.647 "crdt3": 0 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "nvmf_create_transport", 00:17:10.647 "params": { 00:17:10.647 "trtype": "TCP", 00:17:10.647 "max_queue_depth": 128, 00:17:10.647 "max_io_qpairs_per_ctrlr": 127, 00:17:10.647 "in_capsule_data_size": 4096, 00:17:10.647 "max_io_size": 131072, 00:17:10.647 "io_unit_size": 131072, 00:17:10.647 "max_aq_depth": 128, 00:17:10.647 "num_shared_buffers": 511, 00:17:10.647 "buf_cache_size": 4294967295, 00:17:10.647 "dif_insert_or_strip": false, 00:17:10.647 "zcopy": false, 00:17:10.647 "c2h_success": false, 00:17:10.647 "sock_priority": 0, 00:17:10.647 "abort_timeout_sec": 1, 00:17:10.647 "ack_timeout": 0, 00:17:10.647 "data_wr_pool_size": 0 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "nvmf_create_subsystem", 00:17:10.647 "params": { 00:17:10.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.647 "allow_any_host": false, 00:17:10.647 "serial_number": "00000000000000000000", 00:17:10.647 "model_number": "SPDK bdev Controller", 00:17:10.647 "max_namespaces": 32, 00:17:10.647 "min_cntlid": 1, 00:17:10.647 "max_cntlid": 65519, 00:17:10.647 "ana_reporting": false 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "nvmf_subsystem_add_host", 
00:17:10.647 "params": { 00:17:10.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.647 "host": "nqn.2016-06.io.spdk:host1", 00:17:10.647 "psk": "key0" 00:17:10.647 } 00:17:10.647 }, 00:17:10.647 { 00:17:10.647 "method": "nvmf_subsystem_add_ns", 00:17:10.648 "params": { 00:17:10.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.648 "namespace": { 00:17:10.648 "nsid": 1, 00:17:10.648 "bdev_name": "malloc0", 00:17:10.648 "nguid": "93D7A76EB1C74651807CBDBA34D27B23", 00:17:10.648 "uuid": "93d7a76e-b1c7-4651-807c-bdba34d27b23", 00:17:10.648 "no_auto_visible": false 00:17:10.648 } 00:17:10.648 } 00:17:10.648 }, 00:17:10.648 { 00:17:10.648 "method": "nvmf_subsystem_add_listener", 00:17:10.648 "params": { 00:17:10.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.648 "listen_address": { 00:17:10.648 "trtype": "TCP", 00:17:10.648 "adrfam": "IPv4", 00:17:10.648 "traddr": "10.0.0.3", 00:17:10.648 "trsvcid": "4420" 00:17:10.648 }, 00:17:10.648 "secure_channel": false, 00:17:10.648 "sock_impl": "ssl" 00:17:10.648 } 00:17:10.648 } 00:17:10.648 ] 00:17:10.648 } 00:17:10.648 ] 00:17:10.648 }' 00:17:10.648 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:10.907 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:10.907 "subsystems": [ 00:17:10.907 { 00:17:10.907 "subsystem": "keyring", 00:17:10.907 "config": [ 00:17:10.907 { 00:17:10.907 "method": "keyring_file_add_key", 00:17:10.907 "params": { 00:17:10.907 "name": "key0", 00:17:10.907 "path": "/tmp/tmp.6sFWvfQYsR" 00:17:10.907 } 00:17:10.907 } 00:17:10.907 ] 00:17:10.907 }, 00:17:10.907 { 00:17:10.907 "subsystem": "iobuf", 00:17:10.907 "config": [ 00:17:10.907 { 00:17:10.907 "method": "iobuf_set_options", 00:17:10.907 "params": { 00:17:10.907 "small_pool_count": 8192, 00:17:10.907 "large_pool_count": 1024, 00:17:10.907 "small_bufsize": 8192, 00:17:10.907 "large_bufsize": 135168, 00:17:10.907 "enable_numa": false 00:17:10.907 } 00:17:10.907 } 00:17:10.907 ] 00:17:10.907 }, 00:17:10.907 { 00:17:10.907 "subsystem": "sock", 00:17:10.907 "config": [ 00:17:10.907 { 00:17:10.907 "method": "sock_set_default_impl", 00:17:10.907 "params": { 00:17:10.907 "impl_name": "uring" 00:17:10.907 } 00:17:10.907 }, 00:17:10.907 { 00:17:10.907 "method": "sock_impl_set_options", 00:17:10.907 "params": { 00:17:10.907 "impl_name": "ssl", 00:17:10.907 "recv_buf_size": 4096, 00:17:10.907 "send_buf_size": 4096, 00:17:10.907 "enable_recv_pipe": true, 00:17:10.907 "enable_quickack": false, 00:17:10.907 "enable_placement_id": 0, 00:17:10.907 "enable_zerocopy_send_server": true, 00:17:10.907 "enable_zerocopy_send_client": false, 00:17:10.907 "zerocopy_threshold": 0, 00:17:10.907 "tls_version": 0, 00:17:10.907 "enable_ktls": false 00:17:10.907 } 00:17:10.907 }, 00:17:10.907 { 00:17:10.907 "method": "sock_impl_set_options", 00:17:10.907 "params": { 00:17:10.907 "impl_name": "posix", 00:17:10.907 "recv_buf_size": 2097152, 00:17:10.907 "send_buf_size": 2097152, 00:17:10.907 "enable_recv_pipe": true, 00:17:10.907 "enable_quickack": false, 00:17:10.907 "enable_placement_id": 0, 00:17:10.907 "enable_zerocopy_send_server": true, 00:17:10.907 "enable_zerocopy_send_client": false, 00:17:10.907 "zerocopy_threshold": 0, 00:17:10.907 "tls_version": 0, 00:17:10.907 "enable_ktls": false 00:17:10.907 } 00:17:10.907 }, 00:17:10.907 { 00:17:10.907 "method": "sock_impl_set_options", 00:17:10.907 "params": { 00:17:10.907 "impl_name": "uring", 00:17:10.907 
"recv_buf_size": 2097152, 00:17:10.907 "send_buf_size": 2097152, 00:17:10.908 "enable_recv_pipe": true, 00:17:10.908 "enable_quickack": false, 00:17:10.908 "enable_placement_id": 0, 00:17:10.908 "enable_zerocopy_send_server": false, 00:17:10.908 "enable_zerocopy_send_client": false, 00:17:10.908 "zerocopy_threshold": 0, 00:17:10.908 "tls_version": 0, 00:17:10.908 "enable_ktls": false 00:17:10.908 } 00:17:10.908 } 00:17:10.908 ] 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "subsystem": "vmd", 00:17:10.908 "config": [] 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "subsystem": "accel", 00:17:10.908 "config": [ 00:17:10.908 { 00:17:10.908 "method": "accel_set_options", 00:17:10.908 "params": { 00:17:10.908 "small_cache_size": 128, 00:17:10.908 "large_cache_size": 16, 00:17:10.908 "task_count": 2048, 00:17:10.908 "sequence_count": 2048, 00:17:10.908 "buf_count": 2048 00:17:10.908 } 00:17:10.908 } 00:17:10.908 ] 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "subsystem": "bdev", 00:17:10.908 "config": [ 00:17:10.908 { 00:17:10.908 "method": "bdev_set_options", 00:17:10.908 "params": { 00:17:10.908 "bdev_io_pool_size": 65535, 00:17:10.908 "bdev_io_cache_size": 256, 00:17:10.908 "bdev_auto_examine": true, 00:17:10.908 "iobuf_small_cache_size": 128, 00:17:10.908 "iobuf_large_cache_size": 16 00:17:10.908 } 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "method": "bdev_raid_set_options", 00:17:10.908 "params": { 00:17:10.908 "process_window_size_kb": 1024, 00:17:10.908 "process_max_bandwidth_mb_sec": 0 00:17:10.908 } 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "method": "bdev_iscsi_set_options", 00:17:10.908 "params": { 00:17:10.908 "timeout_sec": 30 00:17:10.908 } 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "method": "bdev_nvme_set_options", 00:17:10.908 "params": { 00:17:10.908 "action_on_timeout": "none", 00:17:10.908 "timeout_us": 0, 00:17:10.908 "timeout_admin_us": 0, 00:17:10.908 "keep_alive_timeout_ms": 10000, 00:17:10.908 "arbitration_burst": 0, 00:17:10.908 "low_priority_weight": 0, 00:17:10.908 "medium_priority_weight": 0, 00:17:10.908 "high_priority_weight": 0, 00:17:10.908 "nvme_adminq_poll_period_us": 10000, 00:17:10.908 "nvme_ioq_poll_period_us": 0, 00:17:10.908 "io_queue_requests": 512, 00:17:10.908 "delay_cmd_submit": true, 00:17:10.908 "transport_retry_count": 4, 00:17:10.908 "bdev_retry_count": 3, 00:17:10.908 "transport_ack_timeout": 0, 00:17:10.908 "ctrlr_loss_timeout_sec": 0, 00:17:10.908 "reconnect_delay_sec": 0, 00:17:10.908 "fast_io_fail_timeout_sec": 0, 00:17:10.908 "disable_auto_failback": false, 00:17:10.908 "generate_uuids": false, 00:17:10.908 "transport_tos": 0, 00:17:10.908 "nvme_error_stat": false, 00:17:10.908 "rdma_srq_size": 0, 00:17:10.908 "io_path_stat": false, 00:17:10.908 "allow_accel_sequence": false, 00:17:10.908 "rdma_max_cq_size": 0, 00:17:10.908 "rdma_cm_event_timeout_ms": 0, 00:17:10.908 "dhchap_digests": [ 00:17:10.908 "sha256", 00:17:10.908 "sha384", 00:17:10.908 "sha512" 00:17:10.908 ], 00:17:10.908 "dhchap_dhgroups": [ 00:17:10.908 "null", 00:17:10.908 "ffdhe2048", 00:17:10.908 "ffdhe3072", 00:17:10.908 "ffdhe4096", 00:17:10.908 "ffdhe6144", 00:17:10.908 "ffdhe8192" 00:17:10.908 ] 00:17:10.908 } 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "method": "bdev_nvme_attach_controller", 00:17:10.908 "params": { 00:17:10.908 "name": "nvme0", 00:17:10.908 "trtype": "TCP", 00:17:10.908 "adrfam": "IPv4", 00:17:10.908 "traddr": "10.0.0.3", 00:17:10.908 "trsvcid": "4420", 00:17:10.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.908 "prchk_reftag": false, 00:17:10.908 
"prchk_guard": false, 00:17:10.908 "ctrlr_loss_timeout_sec": 0, 00:17:10.908 "reconnect_delay_sec": 0, 00:17:10.908 "fast_io_fail_timeout_sec": 0, 00:17:10.908 "psk": "key0", 00:17:10.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.908 "hdgst": false, 00:17:10.908 "ddgst": false, 00:17:10.908 "multipath": "multipath" 00:17:10.908 } 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "method": "bdev_nvme_set_hotplug", 00:17:10.908 "params": { 00:17:10.908 "period_us": 100000, 00:17:10.908 "enable": false 00:17:10.908 } 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "method": "bdev_enable_histogram", 00:17:10.908 "params": { 00:17:10.908 "name": "nvme0n1", 00:17:10.908 "enable": true 00:17:10.908 } 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "method": "bdev_wait_for_examine" 00:17:10.908 } 00:17:10.908 ] 00:17:10.908 }, 00:17:10.908 { 00:17:10.908 "subsystem": "nbd", 00:17:10.908 "config": [] 00:17:10.908 } 00:17:10.908 ] 00:17:10.908 }' 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72323 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72323 ']' 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72323 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72323 00:17:10.908 killing process with pid 72323 00:17:10.908 Received shutdown signal, test time was about 1.000000 seconds 00:17:10.908 00:17:10.908 Latency(us) 00:17:10.908 [2024-11-04T17:18:11.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.908 [2024-11-04T17:18:11.712Z] =================================================================================================================== 00:17:10.908 [2024-11-04T17:18:11.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72323' 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72323 00:17:10.908 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72323 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72304 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72304 ']' 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72304 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72304 00:17:11.167 killing process with pid 72304 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72304' 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72304 00:17:11.167 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72304 00:17:11.426 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:11.426 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.426 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:11.426 "subsystems": [ 00:17:11.426 { 00:17:11.426 "subsystem": "keyring", 00:17:11.426 "config": [ 00:17:11.426 { 00:17:11.426 "method": "keyring_file_add_key", 00:17:11.426 "params": { 00:17:11.426 "name": "key0", 00:17:11.426 "path": "/tmp/tmp.6sFWvfQYsR" 00:17:11.426 } 00:17:11.426 } 00:17:11.426 ] 00:17:11.426 }, 00:17:11.426 { 00:17:11.426 "subsystem": "iobuf", 00:17:11.426 "config": [ 00:17:11.426 { 00:17:11.426 "method": "iobuf_set_options", 00:17:11.426 "params": { 00:17:11.426 "small_pool_count": 8192, 00:17:11.426 "large_pool_count": 1024, 00:17:11.426 "small_bufsize": 8192, 00:17:11.426 "large_bufsize": 135168, 00:17:11.426 "enable_numa": false 00:17:11.426 } 00:17:11.426 } 00:17:11.426 ] 00:17:11.426 }, 00:17:11.426 { 00:17:11.426 "subsystem": "sock", 00:17:11.426 "config": [ 00:17:11.426 { 00:17:11.426 "method": "sock_set_default_impl", 00:17:11.426 "params": { 00:17:11.426 "impl_name": "uring" 00:17:11.426 } 00:17:11.426 }, 00:17:11.426 { 00:17:11.426 "method": "sock_impl_set_options", 00:17:11.426 "params": { 00:17:11.426 "impl_name": "ssl", 00:17:11.426 "recv_buf_size": 4096, 00:17:11.426 "send_buf_size": 4096, 00:17:11.426 "enable_recv_pipe": true, 00:17:11.426 "enable_quickack": false, 00:17:11.426 "enable_placement_id": 0, 00:17:11.426 "enable_zerocopy_send_server": true, 00:17:11.426 "enable_zerocopy_send_client": false, 00:17:11.426 "zerocopy_threshold": 0, 00:17:11.426 "tls_version": 0, 00:17:11.426 "enable_ktls": false 00:17:11.426 } 00:17:11.426 }, 00:17:11.426 { 00:17:11.426 "method": "sock_impl_set_options", 00:17:11.426 "params": { 00:17:11.426 "impl_name": "posix", 00:17:11.426 "recv_buf_size": 2097152, 00:17:11.426 "send_buf_size": 2097152, 00:17:11.427 "enable_recv_pipe": true, 00:17:11.427 "enable_quickack": false, 00:17:11.427 "enable_placement_id": 0, 00:17:11.427 "enable_zerocopy_send_server": true, 00:17:11.427 "enable_zerocopy_send_client": false, 00:17:11.427 "zerocopy_threshold": 0, 00:17:11.427 "tls_version": 0, 00:17:11.427 "enable_ktls": false 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "sock_impl_set_options", 00:17:11.427 "params": { 00:17:11.427 "impl_name": "uring", 00:17:11.427 "recv_buf_size": 2097152, 00:17:11.427 "send_buf_size": 2097152, 00:17:11.427 "enable_recv_pipe": true, 00:17:11.427 "enable_quickack": false, 00:17:11.427 "enable_placement_id": 0, 00:17:11.427 "enable_zerocopy_send_server": false, 00:17:11.427 "enable_zerocopy_send_client": false, 00:17:11.427 "zerocopy_threshold": 0, 00:17:11.427 "tls_version": 0, 00:17:11.427 "enable_ktls": false 00:17:11.427 } 00:17:11.427 } 00:17:11.427 ] 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "subsystem": "vmd", 00:17:11.427 "config": [] 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 
"subsystem": "accel", 00:17:11.427 "config": [ 00:17:11.427 { 00:17:11.427 "method": "accel_set_options", 00:17:11.427 "params": { 00:17:11.427 "small_cache_size": 128, 00:17:11.427 "large_cache_size": 16, 00:17:11.427 "task_count": 2048, 00:17:11.427 "sequence_count": 2048, 00:17:11.427 "buf_count": 2048 00:17:11.427 } 00:17:11.427 } 00:17:11.427 ] 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "subsystem": "bdev", 00:17:11.427 "config": [ 00:17:11.427 { 00:17:11.427 "method": "bdev_set_options", 00:17:11.427 "params": { 00:17:11.427 "bdev_io_pool_size": 65535, 00:17:11.427 "bdev_io_cache_size": 256, 00:17:11.427 "bdev_auto_examine": true, 00:17:11.427 "iobuf_small_cache_size": 128, 00:17:11.427 "iobuf_large_cache_size": 16 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "bdev_raid_set_options", 00:17:11.427 "params": { 00:17:11.427 "process_window_size_kb": 1024, 00:17:11.427 "process_max_bandwidth_mb_sec": 0 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "bdev_iscsi_set_options", 00:17:11.427 "params": { 00:17:11.427 "timeout_sec": 30 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "bdev_nvme_set_options", 00:17:11.427 "params": { 00:17:11.427 "action_on_timeout": "none", 00:17:11.427 "timeout_us": 0, 00:17:11.427 "timeout_admin_us": 0, 00:17:11.427 "keep_alive_timeout_ms": 10000, 00:17:11.427 "arbitration_burst": 0, 00:17:11.427 "low_priority_weight": 0, 00:17:11.427 "medium_priority_weight": 0, 00:17:11.427 "high_priority_weight": 0, 00:17:11.427 "nvme_adminq_poll_period_us": 10000, 00:17:11.427 "nvme_ioq_poll_period_us": 0, 00:17:11.427 "io_queue_requests": 0, 00:17:11.427 "delay_cmd_submit": true, 00:17:11.427 "transport_retry_count": 4, 00:17:11.427 "bdev_retry_count": 3, 00:17:11.427 "transport_ack_timeout": 0, 00:17:11.427 "ctrlr_loss_timeout_sec": 0, 00:17:11.427 "reconnect_delay_sec": 0, 00:17:11.427 "fast_io_fail_timeout_sec": 0, 00:17:11.427 "disable_auto_failback": false, 00:17:11.427 "generate_uuids": false, 00:17:11.427 "transport_tos": 0, 00:17:11.427 "nvme_error_stat": false, 00:17:11.427 "rdma_srq_size": 0, 00:17:11.427 "io_path_stat": false, 00:17:11.427 "allow_accel_sequence": false, 00:17:11.427 "rdma_max_cq_size": 0, 00:17:11.427 "rdma_cm_event_timeout_ms": 0, 00:17:11.427 "dhchap_digests": [ 00:17:11.427 "sha256", 00:17:11.427 "sha384", 00:17:11.427 "sha512" 00:17:11.427 ], 00:17:11.427 "dhchap_dhgroups": [ 00:17:11.427 "null", 00:17:11.427 "ffdhe2048", 00:17:11.427 "ffdhe3072", 00:17:11.427 "ffdhe4096", 00:17:11.427 "ffdhe6144", 00:17:11.427 "ffdhe8192" 00:17:11.427 ] 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "bdev_nvme_set_hotplug", 00:17:11.427 "params": { 00:17:11.427 "period_us": 100000, 00:17:11.427 "enable": false 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "bdev_malloc_create", 00:17:11.427 "params": { 00:17:11.427 "name": "malloc0", 00:17:11.427 "num_blocks": 8192, 00:17:11.427 "block_size": 4096, 00:17:11.427 "physical_block_size": 4096, 00:17:11.427 "uuid": "93d7a76e-b1c7-4651-807c-bdba34d27b23", 00:17:11.427 "optimal_io_boundary": 0, 00:17:11.427 "md_size": 0, 00:17:11.427 "dif_type": 0, 00:17:11.427 "dif_is_head_of_md": false, 00:17:11.427 "dif_pi_format": 0 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "bdev_wait_for_examine" 00:17:11.427 } 00:17:11.427 ] 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "subsystem": "nbd", 00:17:11.427 "config": [] 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "subsystem": "scheduler", 
00:17:11.427 "config": [ 00:17:11.427 { 00:17:11.427 "method": "framework_set_scheduler", 00:17:11.427 "params": { 00:17:11.427 "name": "static" 00:17:11.427 } 00:17:11.427 } 00:17:11.427 ] 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "subsystem": "nvmf", 00:17:11.427 "config": [ 00:17:11.427 { 00:17:11.427 "method": "nvmf_set_config", 00:17:11.427 "params": { 00:17:11.427 "discovery_filter": "match_any", 00:17:11.427 "admin_cmd_passthru": { 00:17:11.427 "identify_ctrlr": false 00:17:11.427 }, 00:17:11.427 "dhchap_digests": [ 00:17:11.427 "sha256", 00:17:11.427 "sha384", 00:17:11.427 "sha512" 00:17:11.427 ], 00:17:11.427 "dhchap_dhgroups": [ 00:17:11.427 "null", 00:17:11.427 "ffdhe2048", 00:17:11.427 "ffdhe3072", 00:17:11.427 "ffdhe4096", 00:17:11.427 "ffdhe6144", 00:17:11.427 "ffdhe8192" 00:17:11.427 ] 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "nvmf_set_max_subsyste 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:11.427 ms", 00:17:11.427 "params": { 00:17:11.427 "max_subsystems": 1024 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "nvmf_set_crdt", 00:17:11.427 "params": { 00:17:11.427 "crdt1": 0, 00:17:11.427 "crdt2": 0, 00:17:11.427 "crdt3": 0 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "nvmf_create_transport", 00:17:11.427 "params": { 00:17:11.427 "trtype": "TCP", 00:17:11.427 "max_queue_depth": 128, 00:17:11.427 "max_io_qpairs_per_ctrlr": 127, 00:17:11.427 "in_capsule_data_size": 4096, 00:17:11.427 "max_io_size": 131072, 00:17:11.427 "io_unit_size": 131072, 00:17:11.427 "max_aq_depth": 128, 00:17:11.427 "num_shared_buffers": 511, 00:17:11.427 "buf_cache_size": 4294967295, 00:17:11.427 "dif_insert_or_strip": false, 00:17:11.427 "zcopy": false, 00:17:11.427 "c2h_success": false, 00:17:11.427 "sock_priority": 0, 00:17:11.427 "abort_timeout_sec": 1, 00:17:11.427 "ack_timeout": 0, 00:17:11.427 "data_wr_pool_size": 0 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "nvmf_create_subsystem", 00:17:11.427 "params": { 00:17:11.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.427 "allow_any_host": false, 00:17:11.427 "serial_number": "00000000000000000000", 00:17:11.427 "model_number": "SPDK bdev Controller", 00:17:11.427 "max_namespaces": 32, 00:17:11.427 "min_cntlid": 1, 00:17:11.427 "max_cntlid": 65519, 00:17:11.427 "ana_reporting": false 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "nvmf_subsystem_add_host", 00:17:11.427 "params": { 00:17:11.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.427 "host": "nqn.2016-06.io.spdk:host1", 00:17:11.427 "psk": "key0" 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "nvmf_subsystem_add_ns", 00:17:11.427 "params": { 00:17:11.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.427 "namespace": { 00:17:11.427 "nsid": 1, 00:17:11.427 "bdev_name": "malloc0", 00:17:11.427 "nguid": "93D7A76EB1C74651807CBDBA34D27B23", 00:17:11.427 "uuid": "93d7a76e-b1c7-4651-807c-bdba34d27b23", 00:17:11.427 "no_auto_visible": false 00:17:11.427 } 00:17:11.427 } 00:17:11.427 }, 00:17:11.427 { 00:17:11.427 "method": "nvmf_subsystem_add_listener", 00:17:11.427 "params": { 00:17:11.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.427 "listen_address": { 00:17:11.427 "trtype": "TCP", 00:17:11.427 "adrfam": "IPv4", 00:17:11.427 "traddr": "10.0.0.3", 00:17:11.427 "trsvcid": "4420" 00:17:11.427 }, 00:17:11.427 "secure_channel": false, 00:17:11.427 "sock_impl": "ssl" 00:17:11.427 } 00:17:11.427 } 
00:17:11.427 ] 00:17:11.427 } 00:17:11.427 ] 00:17:11.427 }' 00:17:11.427 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.427 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72376 00:17:11.427 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:11.427 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72376 00:17:11.427 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72376 ']' 00:17:11.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.427 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.427 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:11.427 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.428 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:11.428 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.428 [2024-11-04 17:18:12.096583] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:11.428 [2024-11-04 17:18:12.096653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.686 [2024-11-04 17:18:12.242252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.686 [2024-11-04 17:18:12.298457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.686 [2024-11-04 17:18:12.298506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.686 [2024-11-04 17:18:12.298518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.686 [2024-11-04 17:18:12.298527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.686 [2024-11-04 17:18:12.298534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
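Note that this second target instance is started from the JSON captured earlier with save_config rather than built up via individual RPCs: the configuration is echoed into the application's -c argument, which the log shows as /dev/fd/62 (and is wrapped in ip netns exec nvmf_tgt_ns_spdk). A rough sketch of that pattern, assuming the saved configuration is held in a shell variable; the exact plumbing inside tls.sh may differ:

# Capture the live target configuration as JSON.
tgtcfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
# Replay it into a fresh target via process substitution; bash exposes the
# read end as a /dev/fd/NN path like the /dev/fd/62 seen above.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")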
00:17:11.686 [2024-11-04 17:18:12.298991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.686 [2024-11-04 17:18:12.469618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:11.945 [2024-11-04 17:18:12.552483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.945 [2024-11-04 17:18:12.584437] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:11.945 [2024-11-04 17:18:12.584663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72408 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72408 /var/tmp/bdevperf.sock 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72408 ']' 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:12.513 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:12.514 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.514 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:12.514 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:12.514 "subsystems": [ 00:17:12.514 { 00:17:12.514 "subsystem": "keyring", 00:17:12.514 "config": [ 00:17:12.514 { 00:17:12.514 "method": "keyring_file_add_key", 00:17:12.514 "params": { 00:17:12.514 "name": "key0", 00:17:12.514 "path": "/tmp/tmp.6sFWvfQYsR" 00:17:12.514 } 00:17:12.514 } 00:17:12.514 ] 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "subsystem": "iobuf", 00:17:12.514 "config": [ 00:17:12.514 { 00:17:12.514 "method": "iobuf_set_options", 00:17:12.514 "params": { 00:17:12.514 "small_pool_count": 8192, 00:17:12.514 "large_pool_count": 1024, 00:17:12.514 "small_bufsize": 8192, 00:17:12.514 "large_bufsize": 135168, 00:17:12.514 "enable_numa": false 00:17:12.514 } 00:17:12.514 } 00:17:12.514 ] 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "subsystem": "sock", 00:17:12.514 "config": [ 00:17:12.514 { 00:17:12.514 "method": "sock_set_default_impl", 00:17:12.514 "params": { 00:17:12.514 "impl_name": "uring" 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "sock_impl_set_options", 00:17:12.514 "params": { 00:17:12.514 "impl_name": "ssl", 00:17:12.514 "recv_buf_size": 4096, 00:17:12.514 "send_buf_size": 4096, 00:17:12.514 "enable_recv_pipe": true, 00:17:12.514 "enable_quickack": false, 00:17:12.514 "enable_placement_id": 0, 00:17:12.514 "enable_zerocopy_send_server": true, 00:17:12.514 "enable_zerocopy_send_client": false, 00:17:12.514 "zerocopy_threshold": 0, 00:17:12.514 "tls_version": 0, 00:17:12.514 "enable_ktls": false 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "sock_impl_set_options", 00:17:12.514 "params": { 00:17:12.514 "impl_name": "posix", 00:17:12.514 "recv_buf_size": 2097152, 00:17:12.514 "send_buf_size": 2097152, 00:17:12.514 "enable_recv_pipe": true, 00:17:12.514 "enable_quickack": false, 00:17:12.514 "enable_placement_id": 0, 00:17:12.514 "enable_zerocopy_send_server": true, 00:17:12.514 "enable_zerocopy_send_client": false, 00:17:12.514 "zerocopy_threshold": 0, 00:17:12.514 "tls_version": 0, 00:17:12.514 "enable_ktls": false 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "sock_impl_set_options", 00:17:12.514 "params": { 00:17:12.514 "impl_name": "uring", 00:17:12.514 "recv_buf_size": 2097152, 00:17:12.514 "send_buf_size": 2097152, 00:17:12.514 "enable_recv_pipe": true, 00:17:12.514 "enable_quickack": false, 00:17:12.514 "enable_placement_id": 0, 00:17:12.514 "enable_zerocopy_send_server": false, 00:17:12.514 "enable_zerocopy_send_client": false, 00:17:12.514 "zerocopy_threshold": 0, 00:17:12.514 "tls_version": 0, 00:17:12.514 "enable_ktls": false 00:17:12.514 } 00:17:12.514 } 00:17:12.514 ] 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "subsystem": "vmd", 00:17:12.514 "config": [] 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "subsystem": "accel", 00:17:12.514 "config": [ 00:17:12.514 { 00:17:12.514 "method": "accel_set_options", 00:17:12.514 "params": { 00:17:12.514 "small_cache_size": 128, 00:17:12.514 "large_cache_size": 16, 00:17:12.514 "task_count": 2048, 00:17:12.514 "sequence_count": 2048, 
00:17:12.514 "buf_count": 2048 00:17:12.514 } 00:17:12.514 } 00:17:12.514 ] 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "subsystem": "bdev", 00:17:12.514 "config": [ 00:17:12.514 { 00:17:12.514 "method": "bdev_set_options", 00:17:12.514 "params": { 00:17:12.514 "bdev_io_pool_size": 65535, 00:17:12.514 "bdev_io_cache_size": 256, 00:17:12.514 "bdev_auto_examine": true, 00:17:12.514 "iobuf_small_cache_size": 128, 00:17:12.514 "iobuf_large_cache_size": 16 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "bdev_raid_set_options", 00:17:12.514 "params": { 00:17:12.514 "process_window_size_kb": 1024, 00:17:12.514 "process_max_bandwidth_mb_sec": 0 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "bdev_iscsi_set_options", 00:17:12.514 "params": { 00:17:12.514 "timeout_sec": 30 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "bdev_nvme_set_options", 00:17:12.514 "params": { 00:17:12.514 "action_on_timeout": "none", 00:17:12.514 "timeout_us": 0, 00:17:12.514 "timeout_admin_us": 0, 00:17:12.514 "keep_alive_timeout_ms": 10000, 00:17:12.514 "arbitration_burst": 0, 00:17:12.514 "low_priority_weight": 0, 00:17:12.514 "medium_priority_weight": 0, 00:17:12.514 "high_priority_weight": 0, 00:17:12.514 "nvme_adminq_poll_period_us": 10000, 00:17:12.514 "nvme_ioq_poll_period_us": 0, 00:17:12.514 "io_queue_requests": 512, 00:17:12.514 "delay_cmd_submit": true, 00:17:12.514 "transport_retry_count": 4, 00:17:12.514 "bdev_retry_count": 3, 00:17:12.514 "transport_ack_timeout": 0, 00:17:12.514 "ctrlr_loss_timeout_sec": 0, 00:17:12.514 "reconnect_delay_sec": 0, 00:17:12.514 "fast_io_fail_timeout_sec": 0, 00:17:12.514 "disable_auto_failback": false, 00:17:12.514 "generate_uuids": false, 00:17:12.514 "transport_tos": 0, 00:17:12.514 "nvme_error_stat": false, 00:17:12.514 "rdma_srq_size": 0, 00:17:12.514 "io_path_stat": false, 00:17:12.514 "allow_accel_sequence": false, 00:17:12.514 "rdma_max_cq_size": 0, 00:17:12.514 "rdma_cm_event_timeout_ms": 0, 00:17:12.514 "dhchap_digests": [ 00:17:12.514 "sha256", 00:17:12.514 "sha384", 00:17:12.514 "sha512" 00:17:12.514 ], 00:17:12.514 "dhchap_dhgroups": [ 00:17:12.514 "null", 00:17:12.514 "ffdhe2048", 00:17:12.514 "ffdhe3072", 00:17:12.514 "ffdhe4096", 00:17:12.514 "ffdhe6144", 00:17:12.514 "ffdhe8192" 00:17:12.514 ] 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "bdev_nvme_attach_controller", 00:17:12.514 "params": { 00:17:12.514 "name": "nvme0", 00:17:12.514 "trtype": "TCP", 00:17:12.514 "adrfam": "IPv4", 00:17:12.514 "traddr": "10.0.0.3", 00:17:12.514 "trsvcid": "4420", 00:17:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.514 "prchk_reftag": false, 00:17:12.514 "prchk_guard": false, 00:17:12.514 "ctrlr_loss_timeout_sec": 0, 00:17:12.514 "reconnect_delay_sec": 0, 00:17:12.514 "fast_io_fail_timeout_sec": 0, 00:17:12.514 "psk": "key0", 00:17:12.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:12.514 "hdgst": false, 00:17:12.514 "ddgst": false, 00:17:12.514 "multipath": "multipath" 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "bdev_nvme_set_hotplug", 00:17:12.514 "params": { 00:17:12.514 "period_us": 100000, 00:17:12.514 "enable": false 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "bdev_enable_histogram", 00:17:12.514 "params": { 00:17:12.514 "name": "nvme0n1", 00:17:12.514 "enable": true 00:17:12.514 } 00:17:12.514 }, 00:17:12.514 { 00:17:12.514 "method": "bdev_wait_for_examine" 00:17:12.514 } 00:17:12.514 ] 00:17:12.514 }, 00:17:12.514 { 
00:17:12.514 "subsystem": "nbd", 00:17:12.514 "config": [] 00:17:12.514 } 00:17:12.514 ] 00:17:12.514 }' 00:17:12.514 [2024-11-04 17:18:13.234340] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:12.514 [2024-11-04 17:18:13.234446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72408 ] 00:17:12.773 [2024-11-04 17:18:13.378292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.773 [2024-11-04 17:18:13.432414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.773 [2024-11-04 17:18:13.568033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:13.032 [2024-11-04 17:18:13.618299] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.600 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:13.600 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:13.600 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:13.600 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:13.877 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.877 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:13.877 Running I/O for 1 seconds... 
00:17:15.253 4151.00 IOPS, 16.21 MiB/s 00:17:15.253 Latency(us) 00:17:15.253 [2024-11-04T17:18:16.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.253 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:15.253 Verification LBA range: start 0x0 length 0x2000 00:17:15.253 nvme0n1 : 1.02 4184.04 16.34 0.00 0.00 30162.50 6047.19 18826.71 00:17:15.253 [2024-11-04T17:18:16.057Z] =================================================================================================================== 00:17:15.253 [2024-11-04T17:18:16.057Z] Total : 4184.04 16.34 0.00 0.00 30162.50 6047.19 18826.71 00:17:15.253 { 00:17:15.253 "results": [ 00:17:15.253 { 00:17:15.253 "job": "nvme0n1", 00:17:15.253 "core_mask": "0x2", 00:17:15.253 "workload": "verify", 00:17:15.253 "status": "finished", 00:17:15.253 "verify_range": { 00:17:15.253 "start": 0, 00:17:15.253 "length": 8192 00:17:15.253 }, 00:17:15.253 "queue_depth": 128, 00:17:15.253 "io_size": 4096, 00:17:15.253 "runtime": 1.022695, 00:17:15.253 "iops": 4184.043140916891, 00:17:15.253 "mibps": 16.343918519206607, 00:17:15.253 "io_failed": 0, 00:17:15.253 "io_timeout": 0, 00:17:15.253 "avg_latency_us": 30162.49879963458, 00:17:15.253 "min_latency_us": 6047.185454545454, 00:17:15.253 "max_latency_us": 18826.705454545456 00:17:15.253 } 00:17:15.253 ], 00:17:15.253 "core_count": 1 00:17:15.253 } 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:15.253 nvmf_trace.0 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72408 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72408 ']' 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72408 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72408 00:17:15.253 17:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:15.253 killing process with pid 72408 00:17:15.253 Received shutdown signal, test time was about 1.000000 seconds 00:17:15.253 00:17:15.253 Latency(us) 00:17:15.253 [2024-11-04T17:18:16.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.253 [2024-11-04T17:18:16.057Z] =================================================================================================================== 00:17:15.253 [2024-11-04T17:18:16.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72408' 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72408 00:17:15.253 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72408 00:17:15.253 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:15.253 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:15.253 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:15.254 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:15.254 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:15.254 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:15.254 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:15.512 rmmod nvme_tcp 00:17:15.512 rmmod nvme_fabrics 00:17:15.512 rmmod nvme_keyring 00:17:15.512 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:15.512 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:15.512 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:15.512 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72376 ']' 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72376 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72376 ']' 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72376 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72376 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72376' 00:17:15.513 killing process with pid 72376 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72376 00:17:15.513 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # 
wait 72376 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:15.772 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.H9MDVO1fAi /tmp/tmp.FZl03bWcIu /tmp/tmp.6sFWvfQYsR 00:17:16.031 ************************************ 00:17:16.031 END TEST nvmf_tls 00:17:16.031 ************************************ 00:17:16.031 00:17:16.031 real 1m23.932s 00:17:16.031 user 2m15.780s 00:17:16.031 sys 0m27.176s 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 
-- # xtrace_disable 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.031 ************************************ 00:17:16.031 START TEST nvmf_fips 00:17:16.031 ************************************ 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:16.031 * Looking for test storage... 00:17:16.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:17:16.031 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:16.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.293 --rc genhtml_branch_coverage=1 00:17:16.293 --rc genhtml_function_coverage=1 00:17:16.293 --rc genhtml_legend=1 00:17:16.293 --rc geninfo_all_blocks=1 00:17:16.293 --rc geninfo_unexecuted_blocks=1 00:17:16.293 00:17:16.293 ' 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:16.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.293 --rc genhtml_branch_coverage=1 00:17:16.293 --rc genhtml_function_coverage=1 00:17:16.293 --rc genhtml_legend=1 00:17:16.293 --rc geninfo_all_blocks=1 00:17:16.293 --rc geninfo_unexecuted_blocks=1 00:17:16.293 00:17:16.293 ' 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:16.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.293 --rc genhtml_branch_coverage=1 00:17:16.293 --rc genhtml_function_coverage=1 00:17:16.293 --rc genhtml_legend=1 00:17:16.293 --rc geninfo_all_blocks=1 00:17:16.293 --rc geninfo_unexecuted_blocks=1 00:17:16.293 00:17:16.293 ' 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:16.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.293 --rc genhtml_branch_coverage=1 00:17:16.293 --rc genhtml_function_coverage=1 00:17:16.293 --rc genhtml_legend=1 00:17:16.293 --rc geninfo_all_blocks=1 00:17:16.293 --rc geninfo_unexecuted_blocks=1 00:17:16.293 00:17:16.293 ' 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
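The cmp_versions trace above shows how scripts/common.sh decides that lcov 1.15 is older than 2 (and therefore which extra coverage flags to export): the two version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero. A minimal stand-alone sketch of that field-by-field idea, assuming an illustrative version_lt helper rather than the SPDK function itself:

#!/usr/bin/env bash
# version_lt A B -> exit 0 when version A is strictly lower than version B.
# Splits on '.', '-' and ':' like the traced cmp_versions helper and compares
# numeric fields left to right, treating missing fields as 0.
version_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1   # versions are equal, so not strictly lower
}

if version_lt 1.15 2; then
    echo "lcov 1.15 is older than 2: add the extra --rc coverage options"
fi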
00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.293 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:16.294 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:16.294 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:16.294 Error setting digest 00:17:16.294 40A2D097E37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:16.294 40A2D097E37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:16.294 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:16.295 
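At this point fips.sh has verified the FIPS environment: OpenSSL is 3.x (ge 3.1.1 3.0.0), the provider module fips.so is present under the OpenSSL modules directory, OPENSSL_CONF has been pointed at a generated spdk_fips.conf, both the base and fips providers are listed, and a deliberate 'openssl md5' fails because MD5 is not FIPS-approved. A hedged sketch of the same sanity checks, assuming FIPS enforcement is already the active OpenSSL policy (it does not rebuild the spdk_fips.conf that the real script generates):

#!/usr/bin/env bash
# Rough FIPS sanity check mirroring the steps traced above. It assumes FIPS
# enforcement is already the active OpenSSL policy; the real fips.sh instead
# builds and exports its own spdk_fips.conf before running these checks.
set -euo pipefail

# 1. Provider-based FIPS needs OpenSSL 3.x.
ver=$(openssl version | awk '{print $2}')
case "$ver" in
    3.*) ;;
    *)   echo "OpenSSL 3.x required, got $ver" >&2; exit 1 ;;
esac

# 2. The FIPS provider module must be installed.
moddir=$(openssl info -modulesdir)
[[ -f "$moddir/fips.so" ]] || { echo "no fips.so in $moddir" >&2; exit 1; }

# 3. Both the base and fips providers should be loaded.
openssl list -providers | grep -i name

# 4. Negative test: MD5 is not FIPS-approved, so digesting with it must fail.
if echo hello | openssl md5 >/dev/null 2>&1; then
    echo "MD5 still works: FIPS enforcement is NOT active" >&2
    exit 1
fi
echo "MD5 rejected as expected: FIPS enforcement looks active"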
17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.295 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:16.554 Cannot find device "nvmf_init_br" 00:17:16.554 17:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:16.554 Cannot find device "nvmf_init_br2" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:16.554 Cannot find device "nvmf_tgt_br" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.554 Cannot find device "nvmf_tgt_br2" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:16.554 Cannot find device "nvmf_init_br" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:16.554 Cannot find device "nvmf_init_br2" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:16.554 Cannot find device "nvmf_tgt_br" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:16.554 Cannot find device "nvmf_tgt_br2" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:16.554 Cannot find device "nvmf_br" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:16.554 Cannot find device "nvmf_init_if" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:16.554 Cannot find device "nvmf_init_if2" 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.554 17:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:16.554 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:16.813 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:16.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:16.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:16.814 00:17:16.814 --- 10.0.0.3 ping statistics --- 00:17:16.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.814 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:16.814 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:16.814 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:17:16.814 00:17:16.814 --- 10.0.0.4 ping statistics --- 00:17:16.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.814 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:16.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:16.814 00:17:16.814 --- 10.0.0.1 ping statistics --- 00:17:16.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.814 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:16.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:16.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:17:16.814 00:17:16.814 --- 10.0.0.2 ping statistics --- 00:17:16.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.814 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72735 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72735 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72735 ']' 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:16.814 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:16.814 [2024-11-04 17:18:17.575819] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
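The pings above complete nvmf_veth_init: a nvmf_tgt_ns_spdk namespace, veth pairs whose target-side ends (10.0.0.3/10.0.0.4) live in that namespace while the initiator ends (10.0.0.1/10.0.0.2) stay in the root namespace, a nvmf_br bridge enslaving the host-side peers, and iptables ACCEPT rules tagged with an SPDK_NVMF comment so that nvmf_tcp_fini can later sweep them out with iptables-save | grep -v SPDK_NVMF | iptables-restore. A condensed single-pair sketch of the same topology, reusing the log's interface, namespace and address names but otherwise an illustrative reduction of the helper (needs root):

#!/usr/bin/env bash
# Minimal single-pair version of the veth/netns topology traced above.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One pair plays the initiator (stays in the root namespace), one plays the
# target (moved into the namespace); their peers will sit on a bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic and tag the rules so they can be removed in bulk.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:init accept 4420'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:bridge forward'

# Sanity check, mirroring the pings in the log.
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

# Teardown sweep for the tagged firewall rules (same trick as nvmf_tcp_fini).
iptables-save | grep -v SPDK_NVMF | iptables-restore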
00:17:16.814 [2024-11-04 17:18:17.576562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.073 [2024-11-04 17:18:17.728962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.073 [2024-11-04 17:18:17.789756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.073 [2024-11-04 17:18:17.789815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.073 [2024-11-04 17:18:17.789846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.073 [2024-11-04 17:18:17.789861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.073 [2024-11-04 17:18:17.789870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.073 [2024-11-04 17:18:17.790371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.073 [2024-11-04 17:18:17.849151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:18.032 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:18.032 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:17:18.032 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:18.032 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:18.032 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.kY9 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.kY9 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.kY9 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.kY9 00:17:18.033 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:18.291 [2024-11-04 17:18:18.981760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.291 [2024-11-04 17:18:18.997695] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:18.291 [2024-11-04 17:18:18.997877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:18.291 malloc0 00:17:18.291 17:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72771 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72771 /var/tmp/bdevperf.sock 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72771 ']' 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:18.291 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:18.550 [2024-11-04 17:18:19.154912] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:18.550 [2024-11-04 17:18:19.155361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72771 ] 00:17:18.550 [2024-11-04 17:18:19.310395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.808 [2024-11-04 17:18:19.385654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.808 [2024-11-04 17:18:19.444491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:19.375 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:19.375 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:17:19.375 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.kY9 00:17:19.634 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:19.894 [2024-11-04 17:18:20.630860] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.154 TLSTESTn1 00:17:20.154 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:20.154 Running I/O for 10 seconds... 
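With the target listening on 10.0.0.3:4420 behind TLS and the interchange PSK written to a mode-0600 temp file, the client side of the test is: start bdevperf idle (-z) on core 2 with its own RPC socket, register the key file with keyring_file_add_key, attach an NVMe/TCP controller with --psk, then let bdevperf.py perform_tests drive the 128-deep 4 KiB verify workload for 10 seconds. An outline of that sequence using the commands visible in the log; the sleep is a simplified stand-in for the script's waitforlisten helper, and it assumes the target configured in the preceding trace is already running:

#!/usr/bin/env bash
# Client-side outline of the TLS run traced above, using the binaries, RPC
# names and flags visible in the log. Requires the target subsystem
# nqn.2016-06.io.spdk:cnode1 to be listening on 10.0.0.3:4420 with TLS.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
KEY_PATH=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"

# Start bdevperf idle (-z) on core 2 with its own RPC socket and the
# workload parameters from the log: queue depth 128, 4 KiB I/O, verify, 10 s.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
sleep 2   # simplified stand-in for waitforlisten on /var/tmp/bdevperf.sock

# Register the PSK file and attach a TLS-protected NVMe/TCP controller.
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 "$KEY_PATH"
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Drive the configured workload and print per-core results, then clean up.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
kill "$bdevperf_pid"
rm -f "$KEY_PATH"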
00:17:22.027 4096.00 IOPS, 16.00 MiB/s [2024-11-04T17:18:24.207Z] 4281.00 IOPS, 16.72 MiB/s [2024-11-04T17:18:25.143Z] 4314.33 IOPS, 16.85 MiB/s [2024-11-04T17:18:26.078Z] 4345.25 IOPS, 16.97 MiB/s [2024-11-04T17:18:27.013Z] 4362.00 IOPS, 17.04 MiB/s [2024-11-04T17:18:27.957Z] 4377.00 IOPS, 17.10 MiB/s [2024-11-04T17:18:28.911Z] 4350.86 IOPS, 17.00 MiB/s [2024-11-04T17:18:29.847Z] 4340.75 IOPS, 16.96 MiB/s [2024-11-04T17:18:31.220Z] 4241.67 IOPS, 16.57 MiB/s [2024-11-04T17:18:31.220Z] 4099.50 IOPS, 16.01 MiB/s 00:17:30.416 Latency(us) 00:17:30.416 [2024-11-04T17:18:31.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.416 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:30.416 Verification LBA range: start 0x0 length 0x2000 00:17:30.416 TLSTESTn1 : 10.03 4098.80 16.01 0.00 0.00 31149.37 7387.69 29193.31 00:17:30.416 [2024-11-04T17:18:31.220Z] =================================================================================================================== 00:17:30.416 [2024-11-04T17:18:31.220Z] Total : 4098.80 16.01 0.00 0.00 31149.37 7387.69 29193.31 00:17:30.416 { 00:17:30.416 "results": [ 00:17:30.416 { 00:17:30.416 "job": "TLSTESTn1", 00:17:30.416 "core_mask": "0x4", 00:17:30.416 "workload": "verify", 00:17:30.416 "status": "finished", 00:17:30.416 "verify_range": { 00:17:30.416 "start": 0, 00:17:30.416 "length": 8192 00:17:30.416 }, 00:17:30.416 "queue_depth": 128, 00:17:30.416 "io_size": 4096, 00:17:30.416 "runtime": 10.032931, 00:17:30.416 "iops": 4098.802234362022, 00:17:30.416 "mibps": 16.01094622797665, 00:17:30.416 "io_failed": 0, 00:17:30.416 "io_timeout": 0, 00:17:30.416 "avg_latency_us": 31149.370052503247, 00:17:30.416 "min_latency_us": 7387.694545454546, 00:17:30.417 "max_latency_us": 29193.30909090909 00:17:30.417 } 00:17:30.417 ], 00:17:30.417 "core_count": 1 00:17:30.417 } 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:30.417 nvmf_trace.0 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72771 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72771 ']' 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
72771 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:30.417 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72771 00:17:30.417 killing process with pid 72771 00:17:30.417 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.417 00:17:30.417 Latency(us) 00:17:30.417 [2024-11-04T17:18:31.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.417 [2024-11-04T17:18:31.221Z] =================================================================================================================== 00:17:30.417 [2024-11-04T17:18:31.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.417 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:30.417 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:30.417 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72771' 00:17:30.417 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72771 00:17:30.417 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72771 00:17:30.417 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:30.417 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:30.417 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:30.675 rmmod nvme_tcp 00:17:30.675 rmmod nvme_fabrics 00:17:30.675 rmmod nvme_keyring 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72735 ']' 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72735 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72735 ']' 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 72735 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72735 00:17:30.675 killing process with pid 72735 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72735' 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72735 00:17:30.675 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72735 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:30.933 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:17:31.191 17:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.kY9 00:17:31.191 00:17:31.191 real 0m15.216s 00:17:31.191 user 0m21.069s 00:17:31.191 sys 0m5.932s 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:31.191 ************************************ 00:17:31.191 END TEST nvmf_fips 00:17:31.191 ************************************ 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:31.191 17:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.191 ************************************ 00:17:31.191 START TEST nvmf_control_msg_list 00:17:31.191 ************************************ 00:17:31.192 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:31.450 * Looking for test storage... 00:17:31.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.450 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:31.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.450 --rc genhtml_branch_coverage=1 00:17:31.450 --rc genhtml_function_coverage=1 00:17:31.450 --rc genhtml_legend=1 00:17:31.450 --rc geninfo_all_blocks=1 00:17:31.450 --rc geninfo_unexecuted_blocks=1 00:17:31.450 00:17:31.450 ' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:31.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.451 --rc genhtml_branch_coverage=1 00:17:31.451 --rc genhtml_function_coverage=1 00:17:31.451 --rc genhtml_legend=1 00:17:31.451 --rc geninfo_all_blocks=1 00:17:31.451 --rc geninfo_unexecuted_blocks=1 00:17:31.451 00:17:31.451 ' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:31.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.451 --rc genhtml_branch_coverage=1 00:17:31.451 --rc genhtml_function_coverage=1 00:17:31.451 --rc genhtml_legend=1 00:17:31.451 --rc geninfo_all_blocks=1 00:17:31.451 --rc geninfo_unexecuted_blocks=1 00:17:31.451 00:17:31.451 ' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:31.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.451 --rc genhtml_branch_coverage=1 00:17:31.451 --rc genhtml_function_coverage=1 00:17:31.451 --rc genhtml_legend=1 00:17:31.451 --rc geninfo_all_blocks=1 00:17:31.451 --rc 
geninfo_unexecuted_blocks=1 00:17:31.451 00:17:31.451 ' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.451 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:31.451 Cannot find device "nvmf_init_br" 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:31.451 Cannot find device "nvmf_init_br2" 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:31.451 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:31.451 Cannot find device "nvmf_tgt_br" 00:17:31.452 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:31.452 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.452 Cannot find device "nvmf_tgt_br2" 00:17:31.452 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:31.452 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:31.452 Cannot find device "nvmf_init_br" 00:17:31.452 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:31.452 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:31.452 Cannot find device "nvmf_init_br2" 00:17:31.452 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:31.452 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:31.709 Cannot find device "nvmf_tgt_br" 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:31.709 Cannot find device "nvmf_tgt_br2" 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:31.709 Cannot find device "nvmf_br" 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:31.709 Cannot find 
device "nvmf_init_if" 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:31.709 Cannot find device "nvmf_init_if2" 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:31.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.709 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:31.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:31.710 17:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:31.710 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:31.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:31.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:17:31.968 00:17:31.968 --- 10.0.0.3 ping statistics --- 00:17:31.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.968 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:31.968 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:31.968 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:17:31.968 00:17:31.968 --- 10.0.0.4 ping statistics --- 00:17:31.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.968 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:31.968 00:17:31.968 --- 10.0.0.1 ping statistics --- 00:17:31.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.968 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:31.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:17:31.968 00:17:31.968 --- 10.0.0.2 ping statistics --- 00:17:31.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.968 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73167 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73167 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 73167 ']' 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
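For reference, the nvmf_veth_init sequence traced above builds the test's virtual topology by hand: veth pairs for the initiator and target sides, the target endpoints moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers enslaved to nvmf_br, iptables rules opening TCP port 4420, and ping checks across the 10.0.0.0/24 addresses before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same idea (reduced to one initiator pair and one target pair, run as root; this is not the test helper itself) would be:

    #!/usr/bin/env bash
    # One initiator-side veth pair on the host, one target-side pair whose far
    # endpoint lives in a network namespace, both joined by a bridge.
    set -e
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"

    # *_if is the traffic endpoint, *_br is the side that joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$NS"        # target endpoint moves into the namespace

    # initiator address on the host, target address inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow NVMe/TCP traffic on the default port and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity check: the host can reach the namespaced target address
    ping -c 1 10.0.0.3

With that topology in place, the target application (nvmf_tgt -i 0 -e 0xFFFF) is started under ip netns exec nvmf_tgt_ns_spdk, so it listens on 10.0.0.3 while the initiator-side tools connect from the host.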
00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:31.968 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:31.968 [2024-11-04 17:18:32.628846] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:31.968 [2024-11-04 17:18:32.628952] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.232 [2024-11-04 17:18:32.785751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.232 [2024-11-04 17:18:32.849184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.232 [2024-11-04 17:18:32.849259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.232 [2024-11-04 17:18:32.849280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.232 [2024-11-04 17:18:32.849300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.232 [2024-11-04 17:18:32.849310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.232 [2024-11-04 17:18:32.849851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.232 [2024-11-04 17:18:32.913406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:32.232 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:32.232 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:17:32.232 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.232 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.232 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.232 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.233 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:32.233 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:32.233 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:32.233 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.233 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.233 [2024-11-04 17:18:33.032955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.491 Malloc0 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.491 [2024-11-04 17:18:33.074353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73192 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73193 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73194 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:32.491 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73192 00:17:32.491 [2024-11-04 17:18:33.262629] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:32.491 [2024-11-04 17:18:33.272902] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:32.491 [2024-11-04 17:18:33.282870] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:33.862 Initializing NVMe Controllers 00:17:33.862 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:33.862 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:33.862 Initialization complete. Launching workers. 00:17:33.862 ======================================================== 00:17:33.862 Latency(us) 00:17:33.862 Device Information : IOPS MiB/s Average min max 00:17:33.862 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3225.00 12.60 309.86 132.79 1110.48 00:17:33.862 ======================================================== 00:17:33.862 Total : 3225.00 12.60 309.86 132.79 1110.48 00:17:33.862 00:17:33.862 Initializing NVMe Controllers 00:17:33.862 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:33.862 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:33.862 Initialization complete. Launching workers. 00:17:33.862 ======================================================== 00:17:33.862 Latency(us) 00:17:33.862 Device Information : IOPS MiB/s Average min max 00:17:33.862 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3209.00 12.54 311.26 203.23 1426.26 00:17:33.863 ======================================================== 00:17:33.863 Total : 3209.00 12.54 311.26 203.23 1426.26 00:17:33.863 00:17:33.863 Initializing NVMe Controllers 00:17:33.863 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:33.863 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:33.863 Initialization complete. Launching workers. 
00:17:33.863 ======================================================== 00:17:33.863 Latency(us) 00:17:33.863 Device Information : IOPS MiB/s Average min max 00:17:33.863 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3213.99 12.55 310.80 179.22 1177.67 00:17:33.863 ======================================================== 00:17:33.863 Total : 3213.99 12.55 310.80 179.22 1177.67 00:17:33.863 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73193 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73194 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.863 rmmod nvme_tcp 00:17:33.863 rmmod nvme_fabrics 00:17:33.863 rmmod nvme_keyring 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73167 ']' 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73167 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 73167 ']' 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 73167 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73167 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:33.863 killing process with pid 73167 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73167' 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 73167 00:17:33.863 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@976 -- # wait 73167 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:34.121 00:17:34.121 real 0m2.959s 00:17:34.121 user 0m4.867s 00:17:34.121 
sys 0m1.366s 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:34.121 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:34.121 ************************************ 00:17:34.121 END TEST nvmf_control_msg_list 00:17:34.121 ************************************ 00:17:34.381 17:18:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:34.381 17:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:34.381 17:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:34.381 17:18:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.381 ************************************ 00:17:34.381 START TEST nvmf_wait_for_buf 00:17:34.381 ************************************ 00:17:34.381 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:34.381 * Looking for test storage... 00:17:34.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:34.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.381 --rc genhtml_branch_coverage=1 00:17:34.381 --rc genhtml_function_coverage=1 00:17:34.381 --rc genhtml_legend=1 00:17:34.381 --rc geninfo_all_blocks=1 00:17:34.381 --rc geninfo_unexecuted_blocks=1 00:17:34.381 00:17:34.381 ' 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:34.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.381 --rc genhtml_branch_coverage=1 00:17:34.381 --rc genhtml_function_coverage=1 00:17:34.381 --rc genhtml_legend=1 00:17:34.381 --rc geninfo_all_blocks=1 00:17:34.381 --rc geninfo_unexecuted_blocks=1 00:17:34.381 00:17:34.381 ' 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:34.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.381 --rc genhtml_branch_coverage=1 00:17:34.381 --rc genhtml_function_coverage=1 00:17:34.381 --rc genhtml_legend=1 00:17:34.381 --rc geninfo_all_blocks=1 00:17:34.381 --rc geninfo_unexecuted_blocks=1 00:17:34.381 00:17:34.381 ' 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:34.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.381 --rc genhtml_branch_coverage=1 00:17:34.381 --rc genhtml_function_coverage=1 00:17:34.381 --rc genhtml_legend=1 00:17:34.381 --rc geninfo_all_blocks=1 00:17:34.381 --rc geninfo_unexecuted_blocks=1 00:17:34.381 00:17:34.381 ' 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.381 17:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.381 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.382 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.382 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
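The nvmf_wait_for_buf run starting here repeats the same nvmftestinit network bring-up before configuring its own target over RPC, just as the control_msg_list test above did. For clarity, the target configuration that control_msg_list drove through its rpc_cmd helper corresponds roughly to the following standalone rpc.py invocations (a sketch only; rpc_cmd essentially forwards these arguments to scripts/rpc.py against the nvmf_tgt listening on /var/tmp/spdk.sock):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with in-capsule data limited to 768 bytes and a single
    # control message buffer (the condition this test exercises)
    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

    # subsystem with one 32 MiB, 512-byte-block malloc namespace, listening on
    # the namespaced target address
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $RPC bdev_malloc_create -b Malloc0 32 512
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # three concurrent single-queue-depth initiators, one per core mask, as in
    # the control_msg_list run above
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    for mask in 0x2 0x4 0x8; do
      $PERF -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait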
00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:34.641 Cannot find device "nvmf_init_br" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:34.641 Cannot find device "nvmf_init_br2" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:34.641 Cannot find device "nvmf_tgt_br" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.641 Cannot find device "nvmf_tgt_br2" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:34.641 Cannot find device "nvmf_init_br" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:34.641 Cannot find device "nvmf_init_br2" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:34.641 Cannot find device "nvmf_tgt_br" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:34.641 Cannot find device "nvmf_tgt_br2" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:34.641 Cannot find device "nvmf_br" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:34.641 Cannot find device "nvmf_init_if" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:34.641 Cannot find device "nvmf_init_if2" 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.641 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.641 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:34.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:34.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:17:34.901 00:17:34.901 --- 10.0.0.3 ping statistics --- 00:17:34.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.901 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:34.901 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:34.901 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:17:34.901 00:17:34.901 --- 10.0.0.4 ping statistics --- 00:17:34.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.901 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:17:34.901 00:17:34.901 --- 10.0.0.1 ping statistics --- 00:17:34.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.901 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:34.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:34.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:34.901 00:17:34.901 --- 10.0.0.2 ping statistics --- 00:17:34.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.901 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73423 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73423 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 73423 ']' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:34.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:34.901 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:34.901 [2024-11-04 17:18:35.670884] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:17:34.901 [2024-11-04 17:18:35.671548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.159 [2024-11-04 17:18:35.825801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.159 [2024-11-04 17:18:35.887307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.159 [2024-11-04 17:18:35.887363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.159 [2024-11-04 17:18:35.887377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.159 [2024-11-04 17:18:35.887388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.159 [2024-11-04 17:18:35.887397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.159 [2024-11-04 17:18:35.887850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.159 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:35.159 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:17:35.159 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.159 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:35.159 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.417 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.417 17:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 [2024-11-04 17:18:36.046992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 Malloc0 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 [2024-11-04 17:18:36.118767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:35.417 [2024-11-04 17:18:36.142870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.417 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:35.675 [2024-11-04 17:18:36.341401] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:37.050 Initializing NVMe Controllers 00:17:37.050 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:37.050 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:37.050 Initialization complete. Launching workers. 00:17:37.050 ======================================================== 00:17:37.050 Latency(us) 00:17:37.050 Device Information : IOPS MiB/s Average min max 00:17:37.050 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 487.06 60.88 8213.28 5046.98 15877.68 00:17:37.050 ======================================================== 00:17:37.050 Total : 487.06 60.88 8213.28 5046.98 15877.68 00:17:37.050 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4636 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4636 -eq 0 ]] 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.050 rmmod nvme_tcp 00:17:37.050 rmmod nvme_fabrics 00:17:37.050 rmmod nvme_keyring 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73423 ']' 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73423 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 73423 ']' 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- 
# kill -0 73423 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73423 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:37.050 killing process with pid 73423 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73423' 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 73423 00:17:37.050 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 73423 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:37.308 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:37.308 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:37.308 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:37.308 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.308 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:37.308 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:37.308 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:37.308 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:37.308 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:37.566 00:17:37.566 real 0m3.254s 00:17:37.566 user 0m2.574s 00:17:37.566 sys 0m0.828s 00:17:37.566 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:37.566 ************************************ 00:17:37.566 END TEST nvmf_wait_for_buf 00:17:37.566 ************************************ 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.567 ************************************ 00:17:37.567 START TEST nvmf_nsid 00:17:37.567 ************************************ 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:37.567 * Looking for test storage... 
00:17:37.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:37.567 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.826 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.827 --rc genhtml_branch_coverage=1 00:17:37.827 --rc genhtml_function_coverage=1 00:17:37.827 --rc genhtml_legend=1 00:17:37.827 --rc geninfo_all_blocks=1 00:17:37.827 --rc geninfo_unexecuted_blocks=1 00:17:37.827 00:17:37.827 ' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.827 --rc genhtml_branch_coverage=1 00:17:37.827 --rc genhtml_function_coverage=1 00:17:37.827 --rc genhtml_legend=1 00:17:37.827 --rc geninfo_all_blocks=1 00:17:37.827 --rc geninfo_unexecuted_blocks=1 00:17:37.827 00:17:37.827 ' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.827 --rc genhtml_branch_coverage=1 00:17:37.827 --rc genhtml_function_coverage=1 00:17:37.827 --rc genhtml_legend=1 00:17:37.827 --rc geninfo_all_blocks=1 00:17:37.827 --rc geninfo_unexecuted_blocks=1 00:17:37.827 00:17:37.827 ' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.827 --rc genhtml_branch_coverage=1 00:17:37.827 --rc genhtml_function_coverage=1 00:17:37.827 --rc genhtml_legend=1 00:17:37.827 --rc geninfo_all_blocks=1 00:17:37.827 --rc geninfo_unexecuted_blocks=1 00:17:37.827 00:17:37.827 ' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
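The lt/cmp_versions steps traced above are scripts/common.sh checking whether the installed lcov (1.15 here) is older than 2, so the harness can keep emitting the older --rc lcov_branch_coverage style options seen in the LCOV_OPTS lines above. The version strings are split on ".", "-" and ":" and compared field by field. A condensed sketch of that comparison, reconstructed from the traced steps rather than copied from the repository:

# Split version strings on . - : and compare them field by field, as traced above.
cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]                      # all fields equal
}

lt() { cmp_versions "$1" '<' "$2"; }       # lt 1.15 2 succeeds, matching the return 0 above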
00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:37.827 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:37.828 Cannot find device "nvmf_init_br" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:37.828 Cannot find device "nvmf_init_br2" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:37.828 Cannot find device "nvmf_tgt_br" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.828 Cannot find device "nvmf_tgt_br2" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:37.828 Cannot find device "nvmf_init_br" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:37.828 Cannot find device "nvmf_init_br2" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:37.828 Cannot find device "nvmf_tgt_br" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:37.828 Cannot find device "nvmf_tgt_br2" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:37.828 Cannot find device "nvmf_br" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:37.828 Cannot find device "nvmf_init_if" 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:37.828 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:38.086 Cannot find device "nvmf_init_if2" 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:38.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
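At this point nvmf_veth_init has rebuilt the same two-namespace topology the previous test used: one veth pair per interface, the target ends moved into nvmf_tgt_ns_spdk with 10.0.0.3/10.0.0.4, the initiator ends left in the root namespace with 10.0.0.1/10.0.0.2, and every bridge-side peer enslaved to nvmf_br. A condensed sketch of the traced commands, showing only the first initiator/target pair (the *_if2 pair is configured identically):

# Target namespace plus one initiator-side and one target-side veth pair.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end + bridge end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing as in the trace: initiator 10.0.0.1, namespaced target 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bring the links up and join the bridge-side peers to nvmf_br.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

The ipts wrappers that follow add SPDK_NVMF-tagged iptables ACCEPT rules for TCP port 4420 on the initiator interfaces and for forwarding across nvmf_br, after which the four pings verify connectivity in both directions before the nsid target is started.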
00:17:38.086 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:38.087 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:38.087 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:38.087 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:38.087 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:38.087 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:38.345 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:38.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:38.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:17:38.345 00:17:38.345 --- 10.0.0.3 ping statistics --- 00:17:38.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.345 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:17:38.345 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:38.345 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:38.345 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:38.345 00:17:38.345 --- 10.0.0.4 ping statistics --- 00:17:38.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.345 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:38.345 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:38.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:38.345 00:17:38.345 --- 10.0.0.1 ping statistics --- 00:17:38.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.345 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:38.345 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:38.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:38.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:17:38.345 00:17:38.346 --- 10.0.0.2 ping statistics --- 00:17:38.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.346 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73699 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73699 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73699 ']' 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:38.346 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:38.346 [2024-11-04 17:18:38.990141] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:17:38.346 [2024-11-04 17:18:38.990763] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.346 [2024-11-04 17:18:39.133372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.605 [2024-11-04 17:18:39.185099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.605 [2024-11-04 17:18:39.185143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.605 [2024-11-04 17:18:39.185169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.605 [2024-11-04 17:18:39.185177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.605 [2024-11-04 17:18:39.185183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.605 [2024-11-04 17:18:39.185646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.605 [2024-11-04 17:18:39.242163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73719 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2606b5fa-c87a-4aed-97da-d96728f83efa 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=40a67186-67dc-4e28-8f51-0a7deb421e07 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=de4a3425-c963-4542-83c5-fd3f629ee2bf 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.605 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:38.605 null0 00:17:38.605 null1 00:17:38.865 null2 00:17:38.865 [2024-11-04 17:18:39.411841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.865 [2024-11-04 17:18:39.427754] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:38.865 [2024-11-04 17:18:39.427859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73719 ] 00:17:38.865 [2024-11-04 17:18:39.435925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:38.865 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.865 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73719 /var/tmp/tgt2.sock 00:17:38.865 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73719 ']' 00:17:38.865 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:38.865 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:38.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:38.865 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
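The three uuidgen values above become the namespace UUIDs for the nsid test; further down, the test reads each namespace's NGUID back through nvme-cli and expects it to equal the UUID with the dashes stripped, normalized to upper case. A condensed sketch of that check, paraphrasing the uuid2nguid and nvme_get_nguid helpers whose internals the xtrace exposes (nvme-cli and jq assumed to be installed):

    uuid=2606b5fa-c87a-4aed-97da-d96728f83efa      # ns1uuid from this trace
    expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ "$actual" == "$expected" ]] && echo "nsid 1: NGUID matches its UUID"

The same comparison is repeated for nvme0n2 and nvme0n3 against ns2uuid and ns3uuid once the controller is connected.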
00:17:38.865 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:38.865 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:38.865 [2024-11-04 17:18:39.582028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.865 [2024-11-04 17:18:39.651928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.123 [2024-11-04 17:18:39.729093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.381 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:39.381 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:17:39.381 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:39.640 [2024-11-04 17:18:40.358724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.640 [2024-11-04 17:18:40.374880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:39.640 nvme0n1 nvme0n2 00:17:39.640 nvme1n1 00:17:39.640 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:39.640 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:39.640 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:17:39.899 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:40.835 17:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2606b5fa-c87a-4aed-97da-d96728f83efa 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:40.835 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:41.094 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2606b5fac87a4aed97dad96728f83efa 00:17:41.094 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2606B5FAC87A4AED97DAD96728F83EFA 00:17:41.094 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2606B5FAC87A4AED97DAD96728F83EFA == \2\6\0\6\B\5\F\A\C\8\7\A\4\A\E\D\9\7\D\A\D\9\6\7\2\8\F\8\3\E\F\A ]] 00:17:41.094 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:41.094 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:17:41.094 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:41.094 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 40a67186-67dc-4e28-8f51-0a7deb421e07 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=40a6718667dc4e288f510a7deb421e07 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 40A6718667DC4E288F510A7DEB421E07 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 40A6718667DC4E288F510A7DEB421E07 == \4\0\A\6\7\1\8\6\6\7\D\C\4\E\2\8\8\F\5\1\0\A\7\D\E\B\4\2\1\E\0\7 ]] 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:41.095 17:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid de4a3425-c963-4542-83c5-fd3f629ee2bf 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=de4a3425c963454283c5fd3f629ee2bf 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DE4A3425C963454283C5FD3F629EE2BF 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DE4A3425C963454283C5FD3F629EE2BF == \D\E\4\A\3\4\2\5\C\9\6\3\4\5\4\2\8\3\C\5\F\D\3\F\6\2\9\E\E\2\B\F ]] 00:17:41.095 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:41.354 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:41.354 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:41.354 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73719 00:17:41.354 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73719 ']' 00:17:41.354 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73719 00:17:41.354 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:17:41.355 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:41.355 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73719 00:17:41.355 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:41.355 killing process with pid 73719 00:17:41.355 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:41.355 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73719' 00:17:41.355 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73719 00:17:41.355 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73719 00:17:41.614 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:41.614 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.614 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.873 rmmod nvme_tcp 00:17:41.873 rmmod nvme_fabrics 00:17:41.873 rmmod nvme_keyring 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73699 ']' 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73699 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73699 ']' 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73699 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73699 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:41.873 killing process with pid 73699 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73699' 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73699 00:17:41.873 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73699 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:42.131 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:42.389 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:42.389 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.389 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:42.389 00:17:42.389 real 0m4.761s 00:17:42.389 user 0m6.946s 00:17:42.389 sys 0m1.765s 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:42.389 ************************************ 00:17:42.389 END TEST nvmf_nsid 00:17:42.389 ************************************ 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:42.389 00:17:42.389 real 5m6.517s 00:17:42.389 user 10m41.980s 00:17:42.389 sys 1m9.318s 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:42.389 ************************************ 00:17:42.389 END TEST nvmf_target_extra 00:17:42.389 17:18:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.389 ************************************ 00:17:42.389 17:18:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:42.389 17:18:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:42.389 17:18:43 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:42.389 17:18:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:42.389 ************************************ 00:17:42.389 START TEST nvmf_host 00:17:42.389 ************************************ 00:17:42.389 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:42.649 * Looking for test storage... 
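The nvmf_target_extra stage closes here with its cumulative timing, and run_test immediately starts the nvmf_host suite. Like every sub-test, it first locates the test storage and then probes the installed lcov before deciding whether to export the branch/function coverage options; the xtrace below steps through lt/cmp_versions from scripts/common.sh field by field. A condensed sketch of what that comparison amounts to (simplified to the numeric-only path this trace takes; the real helper also handles other operators and mixed separators):

    lt() {                        # succeed when version $1 sorts before version $2
      local -a v1 v2; local i
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                    # equal versions are not "less than"
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: add legacy coverage flags"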
00:17:42.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.649 --rc genhtml_branch_coverage=1 00:17:42.649 --rc genhtml_function_coverage=1 00:17:42.649 --rc genhtml_legend=1 00:17:42.649 --rc geninfo_all_blocks=1 00:17:42.649 --rc geninfo_unexecuted_blocks=1 00:17:42.649 00:17:42.649 ' 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:42.649 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:42.649 --rc genhtml_branch_coverage=1 00:17:42.649 --rc genhtml_function_coverage=1 00:17:42.649 --rc genhtml_legend=1 00:17:42.649 --rc geninfo_all_blocks=1 00:17:42.649 --rc geninfo_unexecuted_blocks=1 00:17:42.649 00:17:42.649 ' 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.649 --rc genhtml_branch_coverage=1 00:17:42.649 --rc genhtml_function_coverage=1 00:17:42.649 --rc genhtml_legend=1 00:17:42.649 --rc geninfo_all_blocks=1 00:17:42.649 --rc geninfo_unexecuted_blocks=1 00:17:42.649 00:17:42.649 ' 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:42.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.649 --rc genhtml_branch_coverage=1 00:17:42.649 --rc genhtml_function_coverage=1 00:17:42.649 --rc genhtml_legend=1 00:17:42.649 --rc geninfo_all_blocks=1 00:17:42.649 --rc geninfo_unexecuted_blocks=1 00:17:42.649 00:17:42.649 ' 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.649 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.650 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:42.650 
17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.650 ************************************ 00:17:42.650 START TEST nvmf_identify 00:17:42.650 ************************************ 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:42.650 * Looking for test storage... 00:17:42.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:17:42.650 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.910 --rc genhtml_branch_coverage=1 00:17:42.910 --rc genhtml_function_coverage=1 00:17:42.910 --rc genhtml_legend=1 00:17:42.910 --rc geninfo_all_blocks=1 00:17:42.910 --rc geninfo_unexecuted_blocks=1 00:17:42.910 00:17:42.910 ' 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.910 --rc genhtml_branch_coverage=1 00:17:42.910 --rc genhtml_function_coverage=1 00:17:42.910 --rc genhtml_legend=1 00:17:42.910 --rc geninfo_all_blocks=1 00:17:42.910 --rc geninfo_unexecuted_blocks=1 00:17:42.910 00:17:42.910 ' 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.910 --rc genhtml_branch_coverage=1 00:17:42.910 --rc genhtml_function_coverage=1 00:17:42.910 --rc genhtml_legend=1 00:17:42.910 --rc geninfo_all_blocks=1 00:17:42.910 --rc geninfo_unexecuted_blocks=1 00:17:42.910 00:17:42.910 ' 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:42.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.910 --rc genhtml_branch_coverage=1 00:17:42.910 --rc genhtml_function_coverage=1 00:17:42.910 --rc genhtml_legend=1 00:17:42.910 --rc geninfo_all_blocks=1 00:17:42.910 --rc geninfo_unexecuted_blocks=1 00:17:42.910 00:17:42.910 ' 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.910 
17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:42.910 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:42.911 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.911 17:18:43 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:42.911 Cannot find device "nvmf_init_br" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:42.911 Cannot find device "nvmf_init_br2" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:42.911 Cannot find device "nvmf_tgt_br" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:42.911 Cannot find device "nvmf_tgt_br2" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:42.911 Cannot find device "nvmf_init_br" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:42.911 Cannot find device "nvmf_init_br2" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:42.911 Cannot find device "nvmf_tgt_br" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:42.911 Cannot find device "nvmf_tgt_br2" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:42.911 Cannot find device "nvmf_br" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:42.911 Cannot find device "nvmf_init_if" 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:42.911 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:43.186 Cannot find device "nvmf_init_if2" 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:43.186 
17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:43.186 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:43.455 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:43.455 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:43.455 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:43.455 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:43.455 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:43.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:43.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:17:43.455 00:17:43.455 --- 10.0.0.3 ping statistics --- 00:17:43.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.455 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:43.455 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:43.455 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:43.455 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:17:43.455 00:17:43.455 --- 10.0.0.4 ping statistics --- 00:17:43.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.455 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:43.455 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:43.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:43.455 00:17:43.455 --- 10.0.0.1 ping statistics --- 00:17:43.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.455 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:43.455 17:18:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:43.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:17:43.455 00:17:43.455 --- 10.0.0.2 ping statistics --- 00:17:43.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.455 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74079 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74079 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 74079 ']' 00:17:43.455 
17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.455 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:43.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.456 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.456 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:43.456 17:18:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:43.456 [2024-11-04 17:18:44.123505] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:43.456 [2024-11-04 17:18:44.123652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.715 [2024-11-04 17:18:44.276688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.715 [2024-11-04 17:18:44.362674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.715 [2024-11-04 17:18:44.362768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.715 [2024-11-04 17:18:44.362800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.715 [2024-11-04 17:18:44.362818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.715 [2024-11-04 17:18:44.362833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
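At this point nvmf/common.sh has finished building the test-bed network recorded above: veth pairs for the initiator side (nvmf_init_if/nvmf_init_br, nvmf_init_if2/nvmf_init_br2), veth pairs for the target side whose _if ends are moved into the nvmf_tgt_ns_spdk namespace, all bridge-side ends slaved to nvmf_br, plus iptables ACCEPT rules for TCP port 4420. A condensed sketch of the same topology, showing only the first initiator/target pair, with names and addresses taken from the log above (run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end goes into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                  # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

The harness additionally tags its iptables rules with an 'SPDK_NVMF:' comment (-m comment), as seen above, so they can be identified later during cleanup.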
00:17:43.715 [2024-11-04 17:18:44.364403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.715 [2024-11-04 17:18:44.364502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.715 [2024-11-04 17:18:44.364652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.715 [2024-11-04 17:18:44.364665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.716 [2024-11-04 17:18:44.428017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.654 [2024-11-04 17:18:45.210572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.654 Malloc0 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.654 [2024-11-04 17:18:45.320768] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:44.654 [ 00:17:44.654 { 00:17:44.654 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:44.654 "subtype": "Discovery", 00:17:44.654 "listen_addresses": [ 00:17:44.654 { 00:17:44.654 "trtype": "TCP", 00:17:44.654 "adrfam": "IPv4", 00:17:44.654 "traddr": "10.0.0.3", 00:17:44.654 "trsvcid": "4420" 00:17:44.654 } 00:17:44.654 ], 00:17:44.654 "allow_any_host": true, 00:17:44.654 "hosts": [] 00:17:44.654 }, 00:17:44.654 { 00:17:44.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.654 "subtype": "NVMe", 00:17:44.654 "listen_addresses": [ 00:17:44.654 { 00:17:44.654 "trtype": "TCP", 00:17:44.654 "adrfam": "IPv4", 00:17:44.654 "traddr": "10.0.0.3", 00:17:44.654 "trsvcid": "4420" 00:17:44.654 } 00:17:44.654 ], 00:17:44.654 "allow_any_host": true, 00:17:44.654 "hosts": [], 00:17:44.654 "serial_number": "SPDK00000000000001", 00:17:44.654 "model_number": "SPDK bdev Controller", 00:17:44.654 "max_namespaces": 32, 00:17:44.654 "min_cntlid": 1, 00:17:44.654 "max_cntlid": 65519, 00:17:44.654 "namespaces": [ 00:17:44.654 { 00:17:44.654 "nsid": 1, 00:17:44.654 "bdev_name": "Malloc0", 00:17:44.654 "name": "Malloc0", 00:17:44.654 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:44.654 "eui64": "ABCDEF0123456789", 00:17:44.654 "uuid": "3c4bb496-ace1-4561-acf5-a73480081da0" 00:17:44.654 } 00:17:44.654 ] 00:17:44.654 } 00:17:44.654 ] 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.654 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:44.654 [2024-11-04 17:18:45.378203] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:17:44.654 [2024-11-04 17:18:45.378283] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74114 ] 00:17:44.916 [2024-11-04 17:18:45.537408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:44.916 [2024-11-04 17:18:45.537488] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:44.916 [2024-11-04 17:18:45.537495] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:44.916 [2024-11-04 17:18:45.537508] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:44.916 [2024-11-04 17:18:45.537519] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:44.917 [2024-11-04 17:18:45.537887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:44.917 [2024-11-04 17:18:45.537968] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15f1750 0 00:17:44.917 [2024-11-04 17:18:45.543242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:44.917 [2024-11-04 17:18:45.543288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:44.917 [2024-11-04 17:18:45.543296] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:44.917 [2024-11-04 17:18:45.543300] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:44.917 [2024-11-04 17:18:45.543333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.543340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.543345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.917 [2024-11-04 17:18:45.543359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:44.917 [2024-11-04 17:18:45.543394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.917 [2024-11-04 17:18:45.550286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.917 [2024-11-04 17:18:45.550320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.917 [2024-11-04 17:18:45.550343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.917 [2024-11-04 17:18:45.550365] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:44.917 [2024-11-04 17:18:45.550374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:44.917 [2024-11-04 17:18:45.550397] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:44.917 [2024-11-04 17:18:45.550414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:44.917 [2024-11-04 17:18:45.550424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.917 [2024-11-04 17:18:45.550433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.917 [2024-11-04 17:18:45.550462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.917 [2024-11-04 17:18:45.550567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.917 [2024-11-04 17:18:45.550575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.917 [2024-11-04 17:18:45.550578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.917 [2024-11-04 17:18:45.550589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:44.917 [2024-11-04 17:18:45.550597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:44.917 [2024-11-04 17:18:45.550621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.917 [2024-11-04 17:18:45.550637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.917 [2024-11-04 17:18:45.550656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.917 [2024-11-04 17:18:45.550706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.917 [2024-11-04 17:18:45.550713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.917 [2024-11-04 17:18:45.550717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.917 [2024-11-04 17:18:45.550727] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:44.917 [2024-11-04 17:18:45.550736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:44.917 [2024-11-04 17:18:45.550744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.917 [2024-11-04 17:18:45.550759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.917 [2024-11-04 17:18:45.550778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.917 [2024-11-04 17:18:45.550824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.917 [2024-11-04 17:18:45.550831] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.917 [2024-11-04 17:18:45.550840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.917 [2024-11-04 17:18:45.550851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:44.917 [2024-11-04 17:18:45.550861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.917 [2024-11-04 17:18:45.550877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.917 [2024-11-04 17:18:45.550894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.917 [2024-11-04 17:18:45.550938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.917 [2024-11-04 17:18:45.550945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.917 [2024-11-04 17:18:45.550949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.550953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.917 [2024-11-04 17:18:45.550959] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:44.917 [2024-11-04 17:18:45.550964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:44.917 [2024-11-04 17:18:45.550972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:44.917 [2024-11-04 17:18:45.551084] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:44.917 [2024-11-04 17:18:45.551090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:44.917 [2024-11-04 17:18:45.551112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.551117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.551121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.917 [2024-11-04 17:18:45.551129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.917 [2024-11-04 17:18:45.551148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.917 [2024-11-04 17:18:45.551198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.917 [2024-11-04 17:18:45.551205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.917 [2024-11-04 17:18:45.551209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:44.917 [2024-11-04 17:18:45.551213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.917 [2024-11-04 17:18:45.551218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:44.917 [2024-11-04 17:18:45.551229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.551234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.551238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.917 [2024-11-04 17:18:45.551245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.917 [2024-11-04 17:18:45.551262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.917 [2024-11-04 17:18:45.551325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.917 [2024-11-04 17:18:45.551334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.917 [2024-11-04 17:18:45.551338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.551342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.917 [2024-11-04 17:18:45.551348] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:44.917 [2024-11-04 17:18:45.551353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:44.917 [2024-11-04 17:18:45.551362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:44.917 [2024-11-04 17:18:45.551378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:44.917 [2024-11-04 17:18:45.551389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.551394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.917 [2024-11-04 17:18:45.551402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.917 [2024-11-04 17:18:45.551424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.917 [2024-11-04 17:18:45.551529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:44.917 [2024-11-04 17:18:45.551536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:44.917 [2024-11-04 17:18:45.551541] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.551545] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f1750): datao=0, datal=4096, cccid=0 00:17:44.917 [2024-11-04 17:18:45.551551] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1655740) on tqpair(0x15f1750): expected_datao=0, payload_size=4096 00:17:44.917 [2024-11-04 17:18:45.551556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:44.917 [2024-11-04 17:18:45.551564] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551569] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.918 [2024-11-04 17:18:45.551585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.918 [2024-11-04 17:18:45.551589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.918 [2024-11-04 17:18:45.551602] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:44.918 [2024-11-04 17:18:45.551613] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:44.918 [2024-11-04 17:18:45.551618] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:44.918 [2024-11-04 17:18:45.551624] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:44.918 [2024-11-04 17:18:45.551629] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:44.918 [2024-11-04 17:18:45.551634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:44.918 [2024-11-04 17:18:45.551648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:44.918 [2024-11-04 17:18:45.551659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.551677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:44.918 [2024-11-04 17:18:45.551697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.918 [2024-11-04 17:18:45.551750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.918 [2024-11-04 17:18:45.551757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.918 [2024-11-04 17:18:45.551760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.918 [2024-11-04 17:18:45.551773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.551788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.918 
[2024-11-04 17:18:45.551795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.551809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.918 [2024-11-04 17:18:45.551820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.551834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.918 [2024-11-04 17:18:45.551840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.551855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.918 [2024-11-04 17:18:45.551862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:44.918 [2024-11-04 17:18:45.551876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:44.918 [2024-11-04 17:18:45.551884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.551888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.551895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.918 [2024-11-04 17:18:45.551916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655740, cid 0, qid 0 00:17:44.918 [2024-11-04 17:18:45.551923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16558c0, cid 1, qid 0 00:17:44.918 [2024-11-04 17:18:45.551928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655a40, cid 2, qid 0 00:17:44.918 [2024-11-04 17:18:45.551933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.918 [2024-11-04 17:18:45.551938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655d40, cid 4, qid 0 00:17:44.918 [2024-11-04 17:18:45.552042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.918 [2024-11-04 17:18:45.552068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.918 [2024-11-04 17:18:45.552077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655d40) on tqpair=0x15f1750 00:17:44.918 [2024-11-04 
17:18:45.552095] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:44.918 [2024-11-04 17:18:45.552105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:44.918 [2024-11-04 17:18:45.552121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.552142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.918 [2024-11-04 17:18:45.552179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655d40, cid 4, qid 0 00:17:44.918 [2024-11-04 17:18:45.552254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:44.918 [2024-11-04 17:18:45.552271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:44.918 [2024-11-04 17:18:45.552278] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552286] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f1750): datao=0, datal=4096, cccid=4 00:17:44.918 [2024-11-04 17:18:45.552295] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1655d40) on tqpair(0x15f1750): expected_datao=0, payload_size=4096 00:17:44.918 [2024-11-04 17:18:45.552303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552315] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552322] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.918 [2024-11-04 17:18:45.552339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.918 [2024-11-04 17:18:45.552343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655d40) on tqpair=0x15f1750 00:17:44.918 [2024-11-04 17:18:45.552363] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:44.918 [2024-11-04 17:18:45.552399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.552413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.918 [2024-11-04 17:18:45.552421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15f1750) 00:17:44.918 [2024-11-04 17:18:45.552435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:44.918 [2024-11-04 17:18:45.552465] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655d40, cid 4, qid 0 00:17:44.918 [2024-11-04 17:18:45.552473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655ec0, cid 5, qid 0 00:17:44.918 [2024-11-04 17:18:45.552588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:44.918 [2024-11-04 17:18:45.552595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:44.918 [2024-11-04 17:18:45.552599] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552603] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f1750): datao=0, datal=1024, cccid=4 00:17:44.918 [2024-11-04 17:18:45.552608] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1655d40) on tqpair(0x15f1750): expected_datao=0, payload_size=1024 00:17:44.918 [2024-11-04 17:18:45.552620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552627] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552631] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:44.918 [2024-11-04 17:18:45.552637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.919 [2024-11-04 17:18:45.552643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.919 [2024-11-04 17:18:45.552647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655ec0) on tqpair=0x15f1750 00:17:44.919 [2024-11-04 17:18:45.552669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.919 [2024-11-04 17:18:45.552677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.919 [2024-11-04 17:18:45.552681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655d40) on tqpair=0x15f1750 00:17:44.919 [2024-11-04 17:18:45.552698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f1750) 00:17:44.919 [2024-11-04 17:18:45.552711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.919 [2024-11-04 17:18:45.552735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655d40, cid 4, qid 0 00:17:44.919 [2024-11-04 17:18:45.552811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:44.919 [2024-11-04 17:18:45.552818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:44.919 [2024-11-04 17:18:45.552822] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552825] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f1750): datao=0, datal=3072, cccid=4 00:17:44.919 [2024-11-04 17:18:45.552830] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1655d40) on tqpair(0x15f1750): expected_datao=0, payload_size=3072 00:17:44.919 [2024-11-04 17:18:45.552835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552842] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:44.919 [2024-11-04 17:18:45.552846] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.919 [2024-11-04 17:18:45.552861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.919 [2024-11-04 17:18:45.552865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655d40) on tqpair=0x15f1750 00:17:44.919 [2024-11-04 17:18:45.552879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15f1750) 00:17:44.919 [2024-11-04 17:18:45.552892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.919 [2024-11-04 17:18:45.552915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655d40, cid 4, qid 0 00:17:44.919 [2024-11-04 17:18:45.552979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:44.919 [2024-11-04 17:18:45.552986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:44.919 [2024-11-04 17:18:45.552990] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.552994] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15f1750): datao=0, datal=8, cccid=4 00:17:44.919 [2024-11-04 17:18:45.552999] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1655d40) on tqpair(0x15f1750): expected_datao=0, payload_size=8 00:17:44.919 [2024-11-04 17:18:45.553003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.919 ===================================================== 00:17:44.919 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:44.919 ===================================================== 00:17:44.919 Controller Capabilities/Features 00:17:44.919 ================================ 00:17:44.919 Vendor ID: 0000 00:17:44.919 Subsystem Vendor ID: 0000 00:17:44.919 Serial Number: .................... 00:17:44.919 Model Number: ........................................ 
00:17:44.919 Firmware Version: 25.01 00:17:44.919 Recommended Arb Burst: 0 00:17:44.919 IEEE OUI Identifier: 00 00 00 00:17:44.919 Multi-path I/O 00:17:44.919 May have multiple subsystem ports: No 00:17:44.919 May have multiple controllers: No 00:17:44.919 Associated with SR-IOV VF: No 00:17:44.919 Max Data Transfer Size: 131072 00:17:44.919 Max Number of Namespaces: 0 00:17:44.919 Max Number of I/O Queues: 1024 00:17:44.919 NVMe Specification Version (VS): 1.3 00:17:44.919 NVMe Specification Version (Identify): 1.3 00:17:44.919 Maximum Queue Entries: 128 00:17:44.919 Contiguous Queues Required: Yes 00:17:44.919 Arbitration Mechanisms Supported 00:17:44.919 Weighted Round Robin: Not Supported 00:17:44.919 Vendor Specific: Not Supported 00:17:44.919 Reset Timeout: 15000 ms 00:17:44.919 Doorbell Stride: 4 bytes 00:17:44.919 NVM Subsystem Reset: Not Supported 00:17:44.919 Command Sets Supported 00:17:44.919 NVM Command Set: Supported 00:17:44.919 Boot Partition: Not Supported 00:17:44.919 Memory Page Size Minimum: 4096 bytes 00:17:44.919 Memory Page Size Maximum: 4096 bytes 00:17:44.919 Persistent Memory Region: Not Supported 00:17:44.919 Optional Asynchronous Events Supported 00:17:44.919 Namespace Attribute Notices: Not Supported 00:17:44.919 Firmware Activation Notices: Not Supported 00:17:44.919 ANA Change Notices: Not Supported 00:17:44.919 PLE Aggregate Log Change Notices: Not Supported 00:17:44.919 LBA Status Info Alert Notices: Not Supported 00:17:44.919 EGE Aggregate Log Change Notices: Not Supported 00:17:44.919 Normal NVM Subsystem Shutdown event: Not Supported 00:17:44.919 Zone Descriptor Change Notices: Not Supported 00:17:44.919 Discovery Log Change Notices: Supported 00:17:44.919 Controller Attributes 00:17:44.919 128-bit Host Identifier: Not Supported 00:17:44.919 Non-Operational Permissive Mode: Not Supported 00:17:44.919 NVM Sets: Not Supported 00:17:44.919 Read Recovery Levels: Not Supported 00:17:44.919 Endurance Groups: Not Supported 00:17:44.919 Predictable Latency Mode: Not Supported 00:17:44.919 Traffic Based Keep ALive: Not Supported 00:17:44.919 Namespace Granularity: Not Supported 00:17:44.919 SQ Associations: Not Supported 00:17:44.919 UUID List: Not Supported 00:17:44.919 Multi-Domain Subsystem: Not Supported 00:17:44.919 Fixed Capacity Management: Not Supported 00:17:44.919 Variable Capacity Management: Not Supported 00:17:44.919 Delete Endurance Group: Not Supported 00:17:44.919 Delete NVM Set: Not Supported 00:17:44.919 Extended LBA Formats Supported: Not Supported 00:17:44.919 Flexible Data Placement Supported: Not Supported 00:17:44.919 00:17:44.919 Controller Memory Buffer Support 00:17:44.919 ================================ 00:17:44.919 Supported: No 00:17:44.919 00:17:44.919 Persistent Memory Region Support 00:17:44.919 ================================ 00:17:44.919 Supported: No 00:17:44.919 00:17:44.919 Admin Command Set Attributes 00:17:44.919 ============================ 00:17:44.919 Security Send/Receive: Not Supported 00:17:44.919 Format NVM: Not Supported 00:17:44.919 Firmware Activate/Download: Not Supported 00:17:44.919 Namespace Management: Not Supported 00:17:44.919 Device Self-Test: Not Supported 00:17:44.919 Directives: Not Supported 00:17:44.919 NVMe-MI: Not Supported 00:17:44.919 Virtualization Management: Not Supported 00:17:44.919 Doorbell Buffer Config: Not Supported 00:17:44.919 Get LBA Status Capability: Not Supported 00:17:44.919 Command & Feature Lockdown Capability: Not Supported 00:17:44.919 Abort Command Limit: 1 00:17:44.919 Async 
Event Request Limit: 4 00:17:44.919 Number of Firmware Slots: N/A 00:17:44.919 Firmware Slot 1 Read-Only: N/A 00:17:44.919 [2024-11-04 17:18:45.553011] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.553015] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.553029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.919 [2024-11-04 17:18:45.553037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.919 [2024-11-04 17:18:45.553041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.919 [2024-11-04 17:18:45.553045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655d40) on tqpair=0x15f1750 00:17:44.919 Firmware Activation Without Reset: N/A 00:17:44.919 Multiple Update Detection Support: N/A 00:17:44.919 Firmware Update Granularity: No Information Provided 00:17:44.919 Per-Namespace SMART Log: No 00:17:44.919 Asymmetric Namespace Access Log Page: Not Supported 00:17:44.919 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:44.919 Command Effects Log Page: Not Supported 00:17:44.920 Get Log Page Extended Data: Supported 00:17:44.920 Telemetry Log Pages: Not Supported 00:17:44.920 Persistent Event Log Pages: Not Supported 00:17:44.920 Supported Log Pages Log Page: May Support 00:17:44.920 Commands Supported & Effects Log Page: Not Supported 00:17:44.920 Feature Identifiers & Effects Log Page:May Support 00:17:44.920 NVMe-MI Commands & Effects Log Page: May Support 00:17:44.920 Data Area 4 for Telemetry Log: Not Supported 00:17:44.920 Error Log Page Entries Supported: 128 00:17:44.920 Keep Alive: Not Supported 00:17:44.920 00:17:44.920 NVM Command Set Attributes 00:17:44.920 ========================== 00:17:44.920 Submission Queue Entry Size 00:17:44.920 Max: 1 00:17:44.920 Min: 1 00:17:44.920 Completion Queue Entry Size 00:17:44.920 Max: 1 00:17:44.920 Min: 1 00:17:44.920 Number of Namespaces: 0 00:17:44.920 Compare Command: Not Supported 00:17:44.920 Write Uncorrectable Command: Not Supported 00:17:44.920 Dataset Management Command: Not Supported 00:17:44.920 Write Zeroes Command: Not Supported 00:17:44.920 Set Features Save Field: Not Supported 00:17:44.920 Reservations: Not Supported 00:17:44.920 Timestamp: Not Supported 00:17:44.920 Copy: Not Supported 00:17:44.920 Volatile Write Cache: Not Present 00:17:44.920 Atomic Write Unit (Normal): 1 00:17:44.920 Atomic Write Unit (PFail): 1 00:17:44.920 Atomic Compare & Write Unit: 1 00:17:44.920 Fused Compare & Write: Supported 00:17:44.920 Scatter-Gather List 00:17:44.920 SGL Command Set: Supported 00:17:44.920 SGL Keyed: Supported 00:17:44.920 SGL Bit Bucket Descriptor: Not Supported 00:17:44.920 SGL Metadata Pointer: Not Supported 00:17:44.920 Oversized SGL: Not Supported 00:17:44.920 SGL Metadata Address: Not Supported 00:17:44.920 SGL Offset: Supported 00:17:44.920 Transport SGL Data Block: Not Supported 00:17:44.920 Replay Protected Memory Block: Not Supported 00:17:44.920 00:17:44.920 Firmware Slot Information 00:17:44.920 ========================= 00:17:44.920 Active slot: 0 00:17:44.920 00:17:44.920 00:17:44.920 Error Log 00:17:44.920 ========= 00:17:44.920 00:17:44.920 Active Namespaces 00:17:44.920 ================= 00:17:44.920 Discovery Log Page 00:17:44.920 ================== 00:17:44.920 Generation Counter: 2 00:17:44.920 Number of Records: 2 00:17:44.920 Record Format: 0 00:17:44.920 00:17:44.920 Discovery Log Entry 0 00:17:44.920 
---------------------- 00:17:44.920 Transport Type: 3 (TCP) 00:17:44.920 Address Family: 1 (IPv4) 00:17:44.920 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:44.920 Entry Flags: 00:17:44.920 Duplicate Returned Information: 1 00:17:44.920 Explicit Persistent Connection Support for Discovery: 1 00:17:44.920 Transport Requirements: 00:17:44.920 Secure Channel: Not Required 00:17:44.920 Port ID: 0 (0x0000) 00:17:44.920 Controller ID: 65535 (0xffff) 00:17:44.920 Admin Max SQ Size: 128 00:17:44.920 Transport Service Identifier: 4420 00:17:44.920 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:44.920 Transport Address: 10.0.0.3 00:17:44.920 Discovery Log Entry 1 00:17:44.920 ---------------------- 00:17:44.920 Transport Type: 3 (TCP) 00:17:44.920 Address Family: 1 (IPv4) 00:17:44.920 Subsystem Type: 2 (NVM Subsystem) 00:17:44.920 Entry Flags: 00:17:44.920 Duplicate Returned Information: 0 00:17:44.920 Explicit Persistent Connection Support for Discovery: 0 00:17:44.920 Transport Requirements: 00:17:44.920 Secure Channel: Not Required 00:17:44.920 Port ID: 0 (0x0000) 00:17:44.920 Controller ID: 65535 (0xffff) 00:17:44.920 Admin Max SQ Size: 128 00:17:44.920 Transport Service Identifier: 4420 00:17:44.920 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:44.920 Transport Address: 10.0.0.3 [2024-11-04 17:18:45.553137] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:44.920 [2024-11-04 17:18:45.553151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655740) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.920 [2024-11-04 17:18:45.553165] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16558c0) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.920 [2024-11-04 17:18:45.553175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655a40) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.920 [2024-11-04 17:18:45.553185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:44.920 [2024-11-04 17:18:45.553200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.920 [2024-11-04 17:18:45.553234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.920 [2024-11-04 17:18:45.553259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.920 [2024-11-04 17:18:45.553312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.920 [2024-11-04 17:18:45.553319] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.920 [2024-11-04 17:18:45.553323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.920 [2024-11-04 17:18:45.553354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.920 [2024-11-04 17:18:45.553376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.920 [2024-11-04 17:18:45.553442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.920 [2024-11-04 17:18:45.553449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.920 [2024-11-04 17:18:45.553453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553463] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:44.920 [2024-11-04 17:18:45.553468] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:44.920 [2024-11-04 17:18:45.553478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.920 [2024-11-04 17:18:45.553494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.920 [2024-11-04 17:18:45.553512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.920 [2024-11-04 17:18:45.553581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.920 [2024-11-04 17:18:45.553589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.920 [2024-11-04 17:18:45.553593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.920 [2024-11-04 17:18:45.553625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.920 [2024-11-04 17:18:45.553644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.920 [2024-11-04 
17:18:45.553692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.920 [2024-11-04 17:18:45.553699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.920 [2024-11-04 17:18:45.553703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.920 [2024-11-04 17:18:45.553741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.920 [2024-11-04 17:18:45.553758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.920 [2024-11-04 17:18:45.553806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.920 [2024-11-04 17:18:45.553812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.920 [2024-11-04 17:18:45.553816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.920 [2024-11-04 17:18:45.553821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.920 [2024-11-04 17:18:45.553831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.553836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.553840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.921 [2024-11-04 17:18:45.553847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.921 [2024-11-04 17:18:45.553864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.921 [2024-11-04 17:18:45.553909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.921 [2024-11-04 17:18:45.553916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.921 [2024-11-04 17:18:45.553920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.553924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.921 [2024-11-04 17:18:45.553935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.553940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.553943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.921 [2024-11-04 17:18:45.553951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.921 [2024-11-04 17:18:45.553968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.921 [2024-11-04 17:18:45.554012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.921 [2024-11-04 17:18:45.554019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.921 
[2024-11-04 17:18:45.554023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.554027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.921 [2024-11-04 17:18:45.554038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.554042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.554046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.921 [2024-11-04 17:18:45.554054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.921 [2024-11-04 17:18:45.554070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.921 [2024-11-04 17:18:45.554115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.921 [2024-11-04 17:18:45.554122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.921 [2024-11-04 17:18:45.554126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.554130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.921 [2024-11-04 17:18:45.554140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.554145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.554149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.921 [2024-11-04 17:18:45.554156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.921 [2024-11-04 17:18:45.554173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.921 [2024-11-04 17:18:45.558243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.921 [2024-11-04 17:18:45.558264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.921 [2024-11-04 17:18:45.558285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.558290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.921 [2024-11-04 17:18:45.558305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.558311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.558315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15f1750) 00:17:44.921 [2024-11-04 17:18:45.558324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.921 [2024-11-04 17:18:45.558351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1655bc0, cid 3, qid 0 00:17:44.921 [2024-11-04 17:18:45.558404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:44.921 [2024-11-04 17:18:45.558411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:44.921 [2024-11-04 17:18:45.558415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:44.921 [2024-11-04 17:18:45.558419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1655bc0) on tqpair=0x15f1750 00:17:44.921 [2024-11-04 17:18:45.558428] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:17:44.921 00:17:44.921 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:44.921 [2024-11-04 17:18:45.606383] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:44.921 [2024-11-04 17:18:45.606441] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74116 ] 00:17:45.184 [2024-11-04 17:18:45.769553] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:45.184 [2024-11-04 17:18:45.769610] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:45.185 [2024-11-04 17:18:45.769617] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:45.185 [2024-11-04 17:18:45.769628] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:45.185 [2024-11-04 17:18:45.769637] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:45.185 [2024-11-04 17:18:45.769930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:45.185 [2024-11-04 17:18:45.770009] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe96750 0 00:17:45.185 [2024-11-04 17:18:45.775269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:45.185 [2024-11-04 17:18:45.775317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:45.185 [2024-11-04 17:18:45.775323] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:45.185 [2024-11-04 17:18:45.775327] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:45.185 [2024-11-04 17:18:45.775357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.775364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.775368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.185 [2024-11-04 17:18:45.775380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:45.185 [2024-11-04 17:18:45.775428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.185 [2024-11-04 17:18:45.783303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.185 [2024-11-04 17:18:45.783328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.185 [2024-11-04 17:18:45.783349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.185 [2024-11-04 17:18:45.783367] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 
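
The identify run above is launched with an explicit transport ID string via -r and full debug logging via -L all. As a rough illustration of what that connection step amounts to against SPDK's public host API (not the identify tool's actual source, which does considerably more), a minimal C sketch could look like the following; the transport string matches the log, while the application name and error handling are purely illustrative.

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative app name, not from the log */
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same transport ID string the job passes to spdk_nvme_identify via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Drives the FABRIC CONNECT, icreq and property get/set exchange traced above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "could not connect to %s\n", trid.traddr);
		return 1;
	}

	printf("connected to %s, cntlid %u\n", trid.subnqn,
	       spdk_nvme_ctrlr_get_data(ctrlr)->cntlid);
	spdk_nvme_detach(ctrlr);
	return 0;
}

Compiling such a snippet would require the headers and libraries from the checked-out SPDK tree (for example through the pkg-config files the build generates), which is an assumption about the local install rather than anything this job itself does.
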
00:17:45.185 [2024-11-04 17:18:45.783380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:45.185 [2024-11-04 17:18:45.783387] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:45.185 [2024-11-04 17:18:45.783402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.185 [2024-11-04 17:18:45.783437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.185 [2024-11-04 17:18:45.783467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.185 [2024-11-04 17:18:45.783525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.185 [2024-11-04 17:18:45.783533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.185 [2024-11-04 17:18:45.783536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.185 [2024-11-04 17:18:45.783552] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:45.185 [2024-11-04 17:18:45.783560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:45.185 [2024-11-04 17:18:45.783568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.185 [2024-11-04 17:18:45.783585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.185 [2024-11-04 17:18:45.783605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.185 [2024-11-04 17:18:45.783656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.185 [2024-11-04 17:18:45.783663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.185 [2024-11-04 17:18:45.783666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.185 [2024-11-04 17:18:45.783677] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:45.185 [2024-11-04 17:18:45.783685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:45.185 [2024-11-04 17:18:45.783693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.185 [2024-11-04 17:18:45.783709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.185 [2024-11-04 17:18:45.783728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.185 [2024-11-04 17:18:45.783786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.185 [2024-11-04 17:18:45.783793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.185 [2024-11-04 17:18:45.783796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.185 [2024-11-04 17:18:45.783807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:45.185 [2024-11-04 17:18:45.783817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.185 [2024-11-04 17:18:45.783833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.185 [2024-11-04 17:18:45.783852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.185 [2024-11-04 17:18:45.783901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.185 [2024-11-04 17:18:45.783907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.185 [2024-11-04 17:18:45.783911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.783915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.185 [2024-11-04 17:18:45.783920] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:45.185 [2024-11-04 17:18:45.783926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:45.185 [2024-11-04 17:18:45.783934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:45.185 [2024-11-04 17:18:45.784061] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:45.185 [2024-11-04 17:18:45.784067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:45.185 [2024-11-04 17:18:45.784075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.784080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.784084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.185 [2024-11-04 17:18:45.784091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.185 [2024-11-04 
17:18:45.784111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.185 [2024-11-04 17:18:45.784156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.185 [2024-11-04 17:18:45.784162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.185 [2024-11-04 17:18:45.784166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.784170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.185 [2024-11-04 17:18:45.784176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:45.185 [2024-11-04 17:18:45.784186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.784190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.784194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.185 [2024-11-04 17:18:45.784202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.185 [2024-11-04 17:18:45.784235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.185 [2024-11-04 17:18:45.784294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.185 [2024-11-04 17:18:45.784303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.185 [2024-11-04 17:18:45.784307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.784312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.185 [2024-11-04 17:18:45.784324] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:45.185 [2024-11-04 17:18:45.784330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:45.185 [2024-11-04 17:18:45.784338] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:45.185 [2024-11-04 17:18:45.784354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:45.185 [2024-11-04 17:18:45.784365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.185 [2024-11-04 17:18:45.784369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.185 [2024-11-04 17:18:45.784378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.185 [2024-11-04 17:18:45.784401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.185 [2024-11-04 17:18:45.784502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.185 [2024-11-04 17:18:45.784509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.186 [2024-11-04 17:18:45.784513] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784517] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe96750): datao=0, datal=4096, cccid=0 00:17:45.186 [2024-11-04 17:18:45.784522] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefa740) on tqpair(0xe96750): expected_datao=0, payload_size=4096 00:17:45.186 [2024-11-04 17:18:45.784527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784536] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784544] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.186 [2024-11-04 17:18:45.784567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.186 [2024-11-04 17:18:45.784571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.186 [2024-11-04 17:18:45.784585] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:45.186 [2024-11-04 17:18:45.784591] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:45.186 [2024-11-04 17:18:45.784596] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:45.186 [2024-11-04 17:18:45.784600] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:45.186 [2024-11-04 17:18:45.784605] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:45.186 [2024-11-04 17:18:45.784611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.784626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.784638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.784656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.186 [2024-11-04 17:18:45.784679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.186 [2024-11-04 17:18:45.784726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.186 [2024-11-04 17:18:45.784733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.186 [2024-11-04 17:18:45.784737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.186 [2024-11-04 17:18:45.784749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 
17:18:45.784758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.784765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.186 [2024-11-04 17:18:45.784771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.784786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.186 [2024-11-04 17:18:45.784792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.784806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.186 [2024-11-04 17:18:45.784813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.784827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.186 [2024-11-04 17:18:45.784833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.784846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.784855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.784859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.784866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.186 [2024-11-04 17:18:45.784887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa740, cid 0, qid 0 00:17:45.186 [2024-11-04 17:18:45.784895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefa8c0, cid 1, qid 0 00:17:45.186 [2024-11-04 17:18:45.784900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefaa40, cid 2, qid 0 00:17:45.186 [2024-11-04 17:18:45.784905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.186 [2024-11-04 17:18:45.784910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefad40, cid 4, qid 0 00:17:45.186 [2024-11-04 17:18:45.785021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.186 [2024-11-04 17:18:45.785028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
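
The SET FEATURES ASYNC EVENT CONFIGURATION command, the four queued ASYNC EVENT REQUESTs, and the "Sending keep alive every 5000000 us" entries above come from the admin-queue setup phase of controller initialization. A hedged C fragment showing how a host application would typically hook into those AER completions and keep the admin queue serviced with SPDK's public API, assuming a controller handle obtained as in the earlier connect sketch:

#include <stdio.h>

#include "spdk/nvme.h"

/* Invoked when one of the outstanding ASYNC EVENT REQUESTs seen above completes. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "AER completed with error\n");
		return;
	}
	printf("async event received, cdw0 0x%x\n", cpl->cdw0);
}

void
service_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/*
	 * Polling the admin queue is what lets keep-alive commands go out on
	 * schedule and AER completions reach the callback; a real application
	 * would do this from its poller loop instead of spinning here.
	 */
	for (;;) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
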
00:17:45.186 [2024-11-04 17:18:45.785031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefad40) on tqpair=0xe96750 00:17:45.186 [2024-11-04 17:18:45.785041] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:45.186 [2024-11-04 17:18:45.785046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.785055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.785066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.785074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.785089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:45.186 [2024-11-04 17:18:45.785108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefad40, cid 4, qid 0 00:17:45.186 [2024-11-04 17:18:45.785161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.186 [2024-11-04 17:18:45.785168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.186 [2024-11-04 17:18:45.785171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefad40) on tqpair=0xe96750 00:17:45.186 [2024-11-04 17:18:45.785274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.785298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.785308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.785320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.186 [2024-11-04 17:18:45.785343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefad40, cid 4, qid 0 00:17:45.186 [2024-11-04 17:18:45.785412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.186 [2024-11-04 17:18:45.785419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.186 [2024-11-04 17:18:45.785423] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785428] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe96750): datao=0, datal=4096, cccid=4 00:17:45.186 
[2024-11-04 17:18:45.785433] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefad40) on tqpair(0xe96750): expected_datao=0, payload_size=4096 00:17:45.186 [2024-11-04 17:18:45.785437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785445] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785450] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.186 [2024-11-04 17:18:45.785464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.186 [2024-11-04 17:18:45.785468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefad40) on tqpair=0xe96750 00:17:45.186 [2024-11-04 17:18:45.785489] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:45.186 [2024-11-04 17:18:45.785503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.785515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:45.186 [2024-11-04 17:18:45.785524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe96750) 00:17:45.186 [2024-11-04 17:18:45.785536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.186 [2024-11-04 17:18:45.785576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefad40, cid 4, qid 0 00:17:45.186 [2024-11-04 17:18:45.785704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.186 [2024-11-04 17:18:45.785712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.186 [2024-11-04 17:18:45.785716] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.186 [2024-11-04 17:18:45.785720] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe96750): datao=0, datal=4096, cccid=4 00:17:45.186 [2024-11-04 17:18:45.785725] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefad40) on tqpair(0xe96750): expected_datao=0, payload_size=4096 00:17:45.187 [2024-11-04 17:18:45.785730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785738] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785742] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.187 [2024-11-04 17:18:45.785757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.187 [2024-11-04 17:18:45.785761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefad40) on tqpair=0xe96750 00:17:45.187 [2024-11-04 17:18:45.785783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 
1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.785795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.785805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.785817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.785840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefad40, cid 4, qid 0 00:17:45.187 [2024-11-04 17:18:45.785902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.187 [2024-11-04 17:18:45.785909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.187 [2024-11-04 17:18:45.785913] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785917] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe96750): datao=0, datal=4096, cccid=4 00:17:45.187 [2024-11-04 17:18:45.785921] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefad40) on tqpair(0xe96750): expected_datao=0, payload_size=4096 00:17:45.187 [2024-11-04 17:18:45.785926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785934] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785938] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.187 [2024-11-04 17:18:45.785953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.187 [2024-11-04 17:18:45.785957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.785961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefad40) on tqpair=0xe96750 00:17:45.187 [2024-11-04 17:18:45.785970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.785980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.786003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.786010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.786016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.786021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.786027] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features 
- Host ID 00:17:45.187 [2024-11-04 17:18:45.786032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:45.187 [2024-11-04 17:18:45.786037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:45.187 [2024-11-04 17:18:45.786053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.786073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786077] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.187 [2024-11-04 17:18:45.786113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefad40, cid 4, qid 0 00:17:45.187 [2024-11-04 17:18:45.786121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefaec0, cid 5, qid 0 00:17:45.187 [2024-11-04 17:18:45.786186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.187 [2024-11-04 17:18:45.786193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.187 [2024-11-04 17:18:45.786197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefad40) on tqpair=0xe96750 00:17:45.187 [2024-11-04 17:18:45.786208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.187 [2024-11-04 17:18:45.786214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.187 [2024-11-04 17:18:45.786245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefaec0) on tqpair=0xe96750 00:17:45.187 [2024-11-04 17:18:45.786262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.786296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefaec0, cid 5, qid 0 00:17:45.187 [2024-11-04 17:18:45.786347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.187 [2024-11-04 17:18:45.786354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.187 [2024-11-04 17:18:45.786358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefaec0) on tqpair=0xe96750 00:17:45.187 [2024-11-04 17:18:45.786373] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.786402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefaec0, cid 5, qid 0 00:17:45.187 [2024-11-04 17:18:45.786462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.187 [2024-11-04 17:18:45.786469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.187 [2024-11-04 17:18:45.786473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefaec0) on tqpair=0xe96750 00:17:45.187 [2024-11-04 17:18:45.786488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.786517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefaec0, cid 5, qid 0 00:17:45.187 [2024-11-04 17:18:45.786580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.187 [2024-11-04 17:18:45.786586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.187 [2024-11-04 17:18:45.786590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefaec0) on tqpair=0xe96750 00:17:45.187 [2024-11-04 17:18:45.786614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.786636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.786655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.786674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786678] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe96750) 00:17:45.187 [2024-11-04 17:18:45.786685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.187 [2024-11-04 17:18:45.786705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefaec0, cid 5, qid 0 00:17:45.187 [2024-11-04 17:18:45.786713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefad40, cid 4, qid 0 00:17:45.187 [2024-11-04 17:18:45.786718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefb040, cid 6, qid 0 00:17:45.187 [2024-11-04 17:18:45.786723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefb1c0, cid 7, qid 0 00:17:45.187 [2024-11-04 17:18:45.786861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.187 [2024-11-04 17:18:45.786868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.187 [2024-11-04 17:18:45.786872] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.187 [2024-11-04 17:18:45.786876] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe96750): datao=0, datal=8192, cccid=5 00:17:45.187 [2024-11-04 17:18:45.786881] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefaec0) on tqpair(0xe96750): expected_datao=0, payload_size=8192 00:17:45.188 [2024-11-04 17:18:45.786885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786902] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786923] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.188 [2024-11-04 17:18:45.786934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.188 [2024-11-04 17:18:45.786938] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786942] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe96750): datao=0, datal=512, cccid=4 00:17:45.188 [2024-11-04 17:18:45.786946] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefad40) on tqpair(0xe96750): expected_datao=0, payload_size=512 00:17:45.188 [2024-11-04 17:18:45.786951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786957] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786961] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.188 [2024-11-04 17:18:45.786972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.188 [2024-11-04 17:18:45.786976] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786979] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe96750): datao=0, datal=512, cccid=6 00:17:45.188 [2024-11-04 17:18:45.786984] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefb040) on tqpair(0xe96750): expected_datao=0, payload_size=512 00:17:45.188 [2024-11-04 17:18:45.786988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786995] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.786999] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.787004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:45.188 [2024-11-04 17:18:45.787010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:45.188 [2024-11-04 17:18:45.787013] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.787017] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe96750): datao=0, datal=4096, cccid=7 00:17:45.188 [2024-11-04 17:18:45.787021] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefb1c0) on tqpair(0xe96750): expected_datao=0, payload_size=4096 00:17:45.188 [2024-11-04 17:18:45.787026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.787032] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.787036] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.787045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.188 [2024-11-04 17:18:45.787050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.188 [2024-11-04 17:18:45.787054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.787058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefaec0) on tqpair=0xe96750 00:17:45.188 [2024-11-04 17:18:45.787074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.188 [2024-11-04 17:18:45.787081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.188 [2024-11-04 17:18:45.787085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.787089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefad40) on tqpair=0xe96750 00:17:45.188 [2024-11-04 17:18:45.787102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.188 [2024-11-04 17:18:45.787108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.188 [2024-11-04 17:18:45.787112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.188 [2024-11-04 17:18:45.787116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefb040) on tqpair=0xe96750 00:17:45.188 [2024-11-04 17:18:45.787123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.188 [2024-11-04 17:18:45.787129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.188 ===================================================== 00:17:45.188 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:45.188 ===================================================== 00:17:45.188 Controller Capabilities/Features 00:17:45.188 ================================ 00:17:45.188 Vendor ID: 8086 00:17:45.188 Subsystem Vendor ID: 8086 00:17:45.188 Serial Number: SPDK00000000000001 00:17:45.188 Model Number: SPDK bdev Controller 00:17:45.188 Firmware Version: 25.01 00:17:45.188 Recommended Arb Burst: 6 00:17:45.188 IEEE OUI Identifier: e4 d2 5c 00:17:45.188 Multi-path I/O 00:17:45.188 May have multiple subsystem ports: Yes 00:17:45.188 May have multiple controllers: Yes 00:17:45.188 Associated with SR-IOV VF: No 00:17:45.188 Max Data Transfer Size: 131072 00:17:45.188 Max Number 
of Namespaces: 32 00:17:45.188 Max Number of I/O Queues: 127 00:17:45.188 NVMe Specification Version (VS): 1.3 00:17:45.188 NVMe Specification Version (Identify): 1.3 00:17:45.188 Maximum Queue Entries: 128 00:17:45.188 Contiguous Queues Required: Yes 00:17:45.188 Arbitration Mechanisms Supported 00:17:45.188 Weighted Round Robin: Not Supported 00:17:45.188 Vendor Specific: Not Supported 00:17:45.188 Reset Timeout: 15000 ms 00:17:45.188 Doorbell Stride: 4 bytes 00:17:45.188 NVM Subsystem Reset: Not Supported 00:17:45.188 Command Sets Supported 00:17:45.188 NVM Command Set: Supported 00:17:45.188 Boot Partition: Not Supported 00:17:45.188 Memory Page Size Minimum: 4096 bytes 00:17:45.188 Memory Page Size Maximum: 4096 bytes 00:17:45.188 Persistent Memory Region: Not Supported 00:17:45.188 Optional Asynchronous Events Supported 00:17:45.188 Namespace Attribute Notices: Supported 00:17:45.188 Firmware Activation Notices: Not Supported 00:17:45.188 ANA Change Notices: Not Supported 00:17:45.188 PLE Aggregate Log Change Notices: Not Supported 00:17:45.188 LBA Status Info Alert Notices: Not Supported 00:17:45.188 EGE Aggregate Log Change Notices: Not Supported 00:17:45.188 Normal NVM Subsystem Shutdown event: Not Supported 00:17:45.188 Zone Descriptor Change Notices: Not Supported 00:17:45.188 Discovery Log Change Notices: Not Supported 00:17:45.188 Controller Attributes 00:17:45.188 128-bit Host Identifier: Supported 00:17:45.188 Non-Operational Permissive Mode: Not Supported 00:17:45.188 NVM Sets: Not Supported 00:17:45.188 Read Recovery Levels: Not Supported 00:17:45.188 Endurance Groups: Not Supported 00:17:45.188 Predictable Latency Mode: Not Supported 00:17:45.188 Traffic Based Keep ALive: Not Supported 00:17:45.188 Namespace Granularity: Not Supported 00:17:45.188 SQ Associations: Not Supported 00:17:45.188 UUID List: Not Supported 00:17:45.188 Multi-Domain Subsystem: Not Supported 00:17:45.188 Fixed Capacity Management: Not Supported 00:17:45.188 Variable Capacity Management: Not Supported 00:17:45.188 Delete Endurance Group: Not Supported 00:17:45.188 Delete NVM Set: Not Supported 00:17:45.188 Extended LBA Formats Supported: Not Supported 00:17:45.188 Flexible Data Placement Supported: Not Supported 00:17:45.188 00:17:45.188 Controller Memory Buffer Support 00:17:45.188 ================================ 00:17:45.188 Supported: No 00:17:45.188 00:17:45.188 Persistent Memory Region Support 00:17:45.188 ================================ 00:17:45.188 Supported: No 00:17:45.188 00:17:45.188 Admin Command Set Attributes 00:17:45.188 ============================ 00:17:45.188 Security Send/Receive: Not Supported 00:17:45.188 Format NVM: Not Supported 00:17:45.188 Firmware Activate/Download: Not Supported 00:17:45.188 Namespace Management: Not Supported 00:17:45.188 Device Self-Test: Not Supported 00:17:45.188 Directives: Not Supported 00:17:45.188 NVMe-MI: Not Supported 00:17:45.188 Virtualization Management: Not Supported 00:17:45.188 Doorbell Buffer Config: Not Supported 00:17:45.188 Get LBA Status Capability: Not Supported 00:17:45.188 Command & Feature Lockdown Capability: Not Supported 00:17:45.188 Abort Command Limit: 4 00:17:45.188 Async Event Request Limit: 4 00:17:45.188 Number of Firmware Slots: N/A 00:17:45.188 Firmware Slot 1 Read-Only: N/A 00:17:45.188 Firmware Activation Without Reset: N/A 00:17:45.188 Multiple Update Detection Support: N/A 00:17:45.188 Firmware Update Granularity: No Information Provided 00:17:45.188 Per-Namespace SMART Log: No 00:17:45.188 Asymmetric Namespace 
Access Log Page: Not Supported 00:17:45.188 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:45.188 Command Effects Log Page: Supported 00:17:45.188 Get Log Page Extended Data: Supported 00:17:45.188 Telemetry Log Pages: Not Supported 00:17:45.188 Persistent Event Log Pages: Not Supported 00:17:45.188 Supported Log Pages Log Page: May Support 00:17:45.188 Commands Supported & Effects Log Page: Not Supported 00:17:45.188 Feature Identifiers & Effects Log Page:May Support 00:17:45.188 NVMe-MI Commands & Effects Log Page: May Support 00:17:45.188 Data Area 4 for Telemetry Log: Not Supported 00:17:45.188 Error Log Page Entries Supported: 128 00:17:45.188 Keep Alive: Supported 00:17:45.188 Keep Alive Granularity: 10000 ms 00:17:45.188 00:17:45.188 NVM Command Set Attributes 00:17:45.188 ========================== 00:17:45.188 Submission Queue Entry Size 00:17:45.188 Max: 64 00:17:45.188 Min: 64 00:17:45.188 Completion Queue Entry Size 00:17:45.188 Max: 16 00:17:45.188 Min: 16 00:17:45.188 Number of Namespaces: 32 00:17:45.188 Compare Command: Supported 00:17:45.188 Write Uncorrectable Command: Not Supported 00:17:45.188 Dataset Management Command: Supported 00:17:45.188 Write Zeroes Command: Supported 00:17:45.188 Set Features Save Field: Not Supported 00:17:45.188 Reservations: Supported 00:17:45.188 Timestamp: Not Supported 00:17:45.188 Copy: Supported 00:17:45.188 Volatile Write Cache: Present 00:17:45.189 Atomic Write Unit (Normal): 1 00:17:45.189 Atomic Write Unit (PFail): 1 00:17:45.189 Atomic Compare & Write Unit: 1 00:17:45.189 Fused Compare & Write: Supported 00:17:45.189 Scatter-Gather List 00:17:45.189 SGL Command Set: Supported 00:17:45.189 SGL Keyed: Supported 00:17:45.189 SGL Bit Bucket Descriptor: Not Supported 00:17:45.189 SGL Metadata Pointer: Not Supported 00:17:45.189 Oversized SGL: Not Supported 00:17:45.189 SGL Metadata Address: Not Supported 00:17:45.189 SGL Offset: Supported 00:17:45.189 Transport SGL Data Block: Not Supported 00:17:45.189 Replay Protected Memory Block: Not Supported 00:17:45.189 00:17:45.189 Firmware Slot Information 00:17:45.189 ========================= 00:17:45.189 Active slot: 1 00:17:45.189 Slot 1 Firmware Revision: 25.01 00:17:45.189 00:17:45.189 00:17:45.189 Commands Supported and Effects 00:17:45.189 ============================== 00:17:45.189 Admin Commands 00:17:45.189 -------------- 00:17:45.189 Get Log Page (02h): Supported 00:17:45.189 Identify (06h): Supported 00:17:45.189 Abort (08h): Supported 00:17:45.189 Set Features (09h): Supported 00:17:45.189 Get Features (0Ah): Supported 00:17:45.189 Asynchronous Event Request (0Ch): Supported 00:17:45.189 Keep Alive (18h): Supported 00:17:45.189 I/O Commands 00:17:45.189 ------------ 00:17:45.189 Flush (00h): Supported LBA-Change 00:17:45.189 Write (01h): Supported LBA-Change 00:17:45.189 Read (02h): Supported 00:17:45.189 Compare (05h): Supported 00:17:45.189 Write Zeroes (08h): Supported LBA-Change 00:17:45.189 Dataset Management (09h): Supported LBA-Change 00:17:45.189 Copy (19h): Supported LBA-Change 00:17:45.189 00:17:45.189 Error Log 00:17:45.189 ========= 00:17:45.189 00:17:45.189 Arbitration 00:17:45.189 =========== 00:17:45.189 Arbitration Burst: 1 00:17:45.189 00:17:45.189 Power Management 00:17:45.189 ================ 00:17:45.189 Number of Power States: 1 00:17:45.189 Current Power State: Power State #0 00:17:45.189 Power State #0: 00:17:45.189 Max Power: 0.00 W 00:17:45.189 Non-Operational State: Operational 00:17:45.189 Entry Latency: Not Reported 00:17:45.189 Exit Latency: Not 
Reported 00:17:45.189 Relative Read Throughput: 0 00:17:45.189 Relative Read Latency: 0 00:17:45.189 Relative Write Throughput: 0 00:17:45.189 Relative Write Latency: 0 00:17:45.189 Idle Power: Not Reported 00:17:45.189 Active Power: Not Reported 00:17:45.189 Non-Operational Permissive Mode: Not Supported 00:17:45.189 00:17:45.189 Health Information 00:17:45.189 ================== 00:17:45.189 Critical Warnings: 00:17:45.189 Available Spare Space: OK 00:17:45.189 Temperature: OK 00:17:45.189 Device Reliability: OK 00:17:45.189 Read Only: No 00:17:45.189 Volatile Memory Backup: OK 00:17:45.189 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:45.189 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:45.189 Available Spare: 0% 00:17:45.189 Available Spare Threshold: 0% 00:17:45.189 Life Percentage Used:[2024-11-04 17:18:45.787132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefb1c0) on tqpair=0xe96750 00:17:45.189 [2024-11-04 17:18:45.787276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe96750) 00:17:45.189 [2024-11-04 17:18:45.787297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.189 [2024-11-04 17:18:45.787322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefb1c0, cid 7, qid 0 00:17:45.189 [2024-11-04 17:18:45.787372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.189 [2024-11-04 17:18:45.787379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.189 [2024-11-04 17:18:45.787383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefb1c0) on tqpair=0xe96750 00:17:45.189 [2024-11-04 17:18:45.787446] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:45.189 [2024-11-04 17:18:45.787458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa740) on tqpair=0xe96750 00:17:45.189 [2024-11-04 17:18:45.787465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.189 [2024-11-04 17:18:45.787471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefa8c0) on tqpair=0xe96750 00:17:45.189 [2024-11-04 17:18:45.787476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.189 [2024-11-04 17:18:45.787481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefaa40) on tqpair=0xe96750 00:17:45.189 [2024-11-04 17:18:45.787486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.189 [2024-11-04 17:18:45.787491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.189 [2024-11-04 17:18:45.787496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.189 [2024-11-04 17:18:45.787505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
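
The controller report printed above (serial number, model number, firmware revision, namespace count, and so on) is what spdk_nvme_identify formats from the IDENTIFY data fetched in the earlier traces. A small sketch of reading a few of the same fields through the public API and then detaching, which is what triggers the "Prepare to destruct SSD" shutdown sequence in the surrounding trace; the field widths follow the NVMe spec, everything else is illustrative:

#include <stdio.h>

#include "spdk/nvme.h"

/*
 * Prints a handful of the fields shown in the identify report above, given a
 * controller handle such as the one obtained in the earlier connect sketch.
 */
void
print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("Serial Number:        %.20s\n", (const char *)cdata->sn);
	printf("Model Number:         %.40s\n", (const char *)cdata->mn);
	printf("Firmware Version:     %.8s\n", (const char *)cdata->fr);
	printf("Number of Namespaces: %u\n", cdata->nn);

	/*
	 * Detaching kicks off the controller shutdown (CC shutdown notification,
	 * CSTS polling) that the debug entries around this report record.
	 */
	spdk_nvme_detach(ctrlr);
}
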
00:17:45.189 [2024-11-04 17:18:45.787509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.189 [2024-11-04 17:18:45.787521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.189 [2024-11-04 17:18:45.787544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.189 [2024-11-04 17:18:45.787590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.189 [2024-11-04 17:18:45.787597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.189 [2024-11-04 17:18:45.787601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.189 [2024-11-04 17:18:45.787617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.189 [2024-11-04 17:18:45.787633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.189 [2024-11-04 17:18:45.787655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.189 [2024-11-04 17:18:45.787718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.189 [2024-11-04 17:18:45.787725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.189 [2024-11-04 17:18:45.787729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787734] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.189 [2024-11-04 17:18:45.787739] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:45.189 [2024-11-04 17:18:45.787744] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:45.189 [2024-11-04 17:18:45.787754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.189 [2024-11-04 17:18:45.787771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.189 [2024-11-04 17:18:45.787789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.189 [2024-11-04 17:18:45.787834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.189 [2024-11-04 17:18:45.787840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.189 [2024-11-04 17:18:45.787844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.189 [2024-11-04 17:18:45.787848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.189 
[2024-11-04 17:18:45.787859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.787864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.787868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.787876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.787908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.787951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.787958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.787962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.787966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.787976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.787981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.787984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.787991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.788050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.788090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.788148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 
17:18:45.788181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.788188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.788295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.788340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.788410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.788452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.788516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.788571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.788639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.788681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.788753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.788796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.788859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788898] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.788914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.788931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 
17:18:45.788974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.788981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.788985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.788989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.788999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.789003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.789007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.789014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.789032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.789078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.789084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.789088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.789092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.190 [2024-11-04 17:18:45.789102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.789107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.789110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.190 [2024-11-04 17:18:45.789118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.190 [2024-11-04 17:18:45.789135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.190 [2024-11-04 17:18:45.789178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.190 [2024-11-04 17:18:45.789184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.190 [2024-11-04 17:18:45.789188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.190 [2024-11-04 17:18:45.789192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.789202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.789233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.789268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.789320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.789326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 
17:18:45.789330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.789345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.789361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.789380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.789423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.789430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.789433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.789448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.789464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.789482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.789533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.789551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.789556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.789571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.789588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.789607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.789659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.789666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.789669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 
00:17:45.191 [2024-11-04 17:18:45.789684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.789700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.789718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.789763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.789770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.789773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.789788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.789804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.789821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.789869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.789876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.789880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.789894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.789911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.789928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.789973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.789980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.789984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.789988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.789999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:45.191 [2024-11-04 17:18:45.790019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.790026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.790043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.790086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.790093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.790096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.790111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.790126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.790143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.790190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.790196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.790200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.790214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.790255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.790274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.790326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.790333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.790337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.790352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.790368] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.790385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.191 [2024-11-04 17:18:45.790434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.191 [2024-11-04 17:18:45.790441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.191 [2024-11-04 17:18:45.790445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.191 [2024-11-04 17:18:45.790459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.191 [2024-11-04 17:18:45.790468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.191 [2024-11-04 17:18:45.790476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.191 [2024-11-04 17:18:45.790493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.192 [2024-11-04 17:18:45.790538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.192 [2024-11-04 17:18:45.790545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.192 [2024-11-04 17:18:45.790548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.192 [2024-11-04 17:18:45.790563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.192 [2024-11-04 17:18:45.790579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.192 [2024-11-04 17:18:45.790597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.192 [2024-11-04 17:18:45.790642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.192 [2024-11-04 17:18:45.790648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.192 [2024-11-04 17:18:45.790652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.192 [2024-11-04 17:18:45.790667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.192 [2024-11-04 17:18:45.790683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.192 [2024-11-04 17:18:45.790700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xefabc0, cid 3, qid 0 00:17:45.192 [2024-11-04 17:18:45.790745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.192 [2024-11-04 17:18:45.790752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.192 [2024-11-04 17:18:45.790756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.192 [2024-11-04 17:18:45.790770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.192 [2024-11-04 17:18:45.790787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.192 [2024-11-04 17:18:45.790804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.192 [2024-11-04 17:18:45.790855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.192 [2024-11-04 17:18:45.790861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.192 [2024-11-04 17:18:45.790865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.192 [2024-11-04 17:18:45.790880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.192 [2024-11-04 17:18:45.790896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.192 [2024-11-04 17:18:45.790913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.192 [2024-11-04 17:18:45.790976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.192 [2024-11-04 17:18:45.790982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.192 [2024-11-04 17:18:45.790986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.790990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.192 [2024-11-04 17:18:45.791000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.791005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.791008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.192 [2024-11-04 17:18:45.791016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.192 [2024-11-04 17:18:45.791032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.192 [2024-11-04 17:18:45.791079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.192 [2024-11-04 17:18:45.791085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:45.192 [2024-11-04 17:18:45.791089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.791093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.192 [2024-11-04 17:18:45.791103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.791108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.791111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.192 [2024-11-04 17:18:45.791119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.192 [2024-11-04 17:18:45.791136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.192 [2024-11-04 17:18:45.791179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.192 [2024-11-04 17:18:45.791186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.192 [2024-11-04 17:18:45.791189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.791193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.192 [2024-11-04 17:18:45.791204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.791208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.791212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe96750) 00:17:45.192 [2024-11-04 17:18:45.795296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.192 [2024-11-04 17:18:45.795354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefabc0, cid 3, qid 0 00:17:45.192 [2024-11-04 17:18:45.795435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:45.192 [2024-11-04 17:18:45.795443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:45.192 [2024-11-04 17:18:45.795447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:45.192 [2024-11-04 17:18:45.795451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefabc0) on tqpair=0xe96750 00:17:45.192 [2024-11-04 17:18:45.795461] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:17:45.192 0% 00:17:45.192 Data Units Read: 0 00:17:45.192 Data Units Written: 0 00:17:45.192 Host Read Commands: 0 00:17:45.192 Host Write Commands: 0 00:17:45.192 Controller Busy Time: 0 minutes 00:17:45.192 Power Cycles: 0 00:17:45.192 Power On Hours: 0 hours 00:17:45.192 Unsafe Shutdowns: 0 00:17:45.192 Unrecoverable Media Errors: 0 00:17:45.192 Lifetime Error Log Entries: 0 00:17:45.192 Warning Temperature Time: 0 minutes 00:17:45.192 Critical Temperature Time: 0 minutes 00:17:45.192 00:17:45.192 Number of Queues 00:17:45.192 ================ 00:17:45.192 Number of I/O Submission Queues: 127 00:17:45.192 Number of I/O Completion Queues: 127 00:17:45.192 00:17:45.192 Active Namespaces 00:17:45.192 ================= 00:17:45.192 Namespace ID:1 00:17:45.192 Error Recovery Timeout: Unlimited 00:17:45.192 Command Set Identifier: NVM (00h) 00:17:45.192 Deallocate: Supported 
00:17:45.192 Deallocated/Unwritten Error: Not Supported 00:17:45.192 Deallocated Read Value: Unknown 00:17:45.192 Deallocate in Write Zeroes: Not Supported 00:17:45.192 Deallocated Guard Field: 0xFFFF 00:17:45.192 Flush: Supported 00:17:45.192 Reservation: Supported 00:17:45.192 Namespace Sharing Capabilities: Multiple Controllers 00:17:45.192 Size (in LBAs): 131072 (0GiB) 00:17:45.192 Capacity (in LBAs): 131072 (0GiB) 00:17:45.192 Utilization (in LBAs): 131072 (0GiB) 00:17:45.192 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:45.192 EUI64: ABCDEF0123456789 00:17:45.192 UUID: 3c4bb496-ace1-4561-acf5-a73480081da0 00:17:45.192 Thin Provisioning: Not Supported 00:17:45.192 Per-NS Atomic Units: Yes 00:17:45.192 Atomic Boundary Size (Normal): 0 00:17:45.192 Atomic Boundary Size (PFail): 0 00:17:45.192 Atomic Boundary Offset: 0 00:17:45.192 Maximum Single Source Range Length: 65535 00:17:45.192 Maximum Copy Length: 65535 00:17:45.192 Maximum Source Range Count: 1 00:17:45.192 NGUID/EUI64 Never Reused: No 00:17:45.192 Namespace Write Protected: No 00:17:45.192 Number of LBA Formats: 1 00:17:45.192 Current LBA Format: LBA Format #00 00:17:45.192 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:45.192 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.192 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.192 rmmod nvme_tcp 00:17:45.193 rmmod nvme_fabrics 00:17:45.193 rmmod nvme_keyring 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74079 ']' 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74079 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 74079 ']' 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 74079 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
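The controller and namespace data printed above was collected by the nvmf_identify test case over NVMe/TCP before the teardown that starts here. For reference only, a roughly equivalent query could be made from the initiator side with stock nvme-cli while the subsystem was still exported; the listener address and port below (10.0.0.3, 4420) are taken from the NVMF_* variables printed further down in this log, and the /dev/nvme0 device node is an assumption that depends on what the host already has attached:

    # Sketch: discover and attach the TCP subsystem exported by the test target,
    # then dump the same Identify Controller / Identify Namespace fields.
    nvme discover -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0      # controller attributes (log pages, SGL support, ...)
    nvme id-ns /dev/nvme0n1      # namespace attributes (size, NGUID, EUI64, LBA formats)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1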
00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74079 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:45.193 killing process with pid 74079 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74079' 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 74079 00:17:45.193 17:18:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 74079 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:45.452 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:45.710 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:45.710 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.711 
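The devices being removed in this teardown (nvmf_init_br*, nvmf_tgt_br*, the nvmf_br bridge, the nvmf_init_if*/nvmf_tgt_if* veth ends and the nvmf_tgt_ns_spdk namespace) are the veth test topology that nvmf_veth_init builds and that the next test re-creates a few lines further down. A minimal sketch of that topology, using only the names and addresses visible in this log, is shown below; the bridge-enslaving lines are inferred from the `nomaster` / `ip link delete nvmf_br type bridge` teardown commands rather than shown verbatim in this excerpt, and the second *_if2/*_br2 pair and the final `up` steps are abbreviated:

    # Sketch: target side lives in its own network namespace, initiator side stays in the root ns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target    <-> bridge
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # assumed: both *_br ends join nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # ...repeat for nvmf_init_if2/nvmf_tgt_if2 (10.0.0.2/10.0.0.4) and bring every link up,
    # as the nvmf_veth_init trace later in this log does.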
17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:45.711 00:17:45.711 real 0m3.125s 00:17:45.711 user 0m7.923s 00:17:45.711 sys 0m0.846s 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:45.711 17:18:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.711 ************************************ 00:17:45.711 END TEST nvmf_identify 00:17:45.711 ************************************ 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.969 ************************************ 00:17:45.969 START TEST nvmf_perf 00:17:45.969 ************************************ 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:45.969 * Looking for test storage... 00:17:45.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:45.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.969 --rc genhtml_branch_coverage=1 00:17:45.969 --rc genhtml_function_coverage=1 00:17:45.969 --rc genhtml_legend=1 00:17:45.969 --rc geninfo_all_blocks=1 00:17:45.969 --rc geninfo_unexecuted_blocks=1 00:17:45.969 00:17:45.969 ' 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:45.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.969 --rc genhtml_branch_coverage=1 00:17:45.969 --rc genhtml_function_coverage=1 00:17:45.969 --rc genhtml_legend=1 00:17:45.969 --rc geninfo_all_blocks=1 00:17:45.969 --rc geninfo_unexecuted_blocks=1 00:17:45.969 00:17:45.969 ' 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:45.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.969 --rc genhtml_branch_coverage=1 00:17:45.969 --rc genhtml_function_coverage=1 00:17:45.969 --rc genhtml_legend=1 00:17:45.969 --rc geninfo_all_blocks=1 00:17:45.969 --rc geninfo_unexecuted_blocks=1 00:17:45.969 00:17:45.969 ' 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:45.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.969 --rc genhtml_branch_coverage=1 00:17:45.969 --rc genhtml_function_coverage=1 00:17:45.969 --rc genhtml_legend=1 00:17:45.969 --rc geninfo_all_blocks=1 00:17:45.969 --rc geninfo_unexecuted_blocks=1 00:17:45.969 00:17:45.969 ' 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.969 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:45.970 Cannot find device "nvmf_init_br" 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:45.970 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:46.228 Cannot find device "nvmf_init_br2" 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:46.228 Cannot find device "nvmf_tgt_br" 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.228 Cannot find device "nvmf_tgt_br2" 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:46.228 Cannot find device "nvmf_init_br" 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:46.228 Cannot find device "nvmf_init_br2" 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:46.228 Cannot find device "nvmf_tgt_br" 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:46.228 Cannot find device "nvmf_tgt_br2" 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:46.228 Cannot find device "nvmf_br" 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:46.228 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:46.228 Cannot find device "nvmf_init_if" 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:46.229 Cannot find device "nvmf_init_if2" 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.229 17:18:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.229 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:46.229 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:46.229 17:18:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:46.229 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:46.229 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:46.229 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:46.229 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:46.488 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:46.489 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.489 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:17:46.489 00:17:46.489 --- 10.0.0.3 ping statistics --- 00:17:46.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.489 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:46.489 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:46.489 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:17:46.489 00:17:46.489 --- 10.0.0.4 ping statistics --- 00:17:46.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.489 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:46.489 00:17:46.489 --- 10.0.0.1 ping statistics --- 00:17:46.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.489 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:46.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:46.489 00:17:46.489 --- 10.0.0.2 ping statistics --- 00:17:46.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.489 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74343 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74343 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 74343 ']' 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:46.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:46.489 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:46.489 [2024-11-04 17:18:47.225296] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:17:46.489 [2024-11-04 17:18:47.225392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.748 [2024-11-04 17:18:47.373945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.748 [2024-11-04 17:18:47.430777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.748 [2024-11-04 17:18:47.430880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.748 [2024-11-04 17:18:47.430907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.748 [2024-11-04 17:18:47.430915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.748 [2024-11-04 17:18:47.430921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.748 [2024-11-04 17:18:47.432248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.748 [2024-11-04 17:18:47.432334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.748 [2024-11-04 17:18:47.432442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.748 [2024-11-04 17:18:47.432448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.748 [2024-11-04 17:18:47.495133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.006 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:47.006 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:17:47.006 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:47.006 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:47.006 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:47.006 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.006 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:47.006 17:18:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:47.265 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:47.265 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:47.833 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:47.833 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:48.092 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:48.092 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:17:48.092 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:48.092 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:48.092 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:48.358 [2024-11-04 17:18:48.908427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.358 17:18:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:48.628 17:18:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:48.628 17:18:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.886 17:18:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:48.886 17:18:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:49.145 17:18:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:49.403 [2024-11-04 17:18:49.990278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:49.403 17:18:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:49.661 17:18:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:49.661 17:18:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:49.661 17:18:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:49.661 17:18:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:50.597 Initializing NVMe Controllers 00:17:50.597 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:50.597 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:50.597 Initialization complete. Launching workers. 00:17:50.597 ======================================================== 00:17:50.597 Latency(us) 00:17:50.597 Device Information : IOPS MiB/s Average min max 00:17:50.597 PCIE (0000:00:10.0) NSID 1 from core 0: 23009.00 89.88 1390.31 245.77 8070.07 00:17:50.597 ======================================================== 00:17:50.597 Total : 23009.00 89.88 1390.31 245.77 8070.07 00:17:50.597 00:17:50.597 17:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:51.975 Initializing NVMe Controllers 00:17:51.975 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.975 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:51.975 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:51.975 Initialization complete. Launching workers. 
00:17:51.975 ======================================================== 00:17:51.975 Latency(us) 00:17:51.975 Device Information : IOPS MiB/s Average min max 00:17:51.975 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3753.00 14.66 266.11 96.24 7140.51 00:17:51.975 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.00 0.49 7974.99 4993.37 12008.25 00:17:51.975 ======================================================== 00:17:51.975 Total : 3879.00 15.15 516.52 96.24 12008.25 00:17:51.975 00:17:51.975 17:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:53.352 Initializing NVMe Controllers 00:17:53.352 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.352 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:53.352 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:53.352 Initialization complete. Launching workers. 00:17:53.352 ======================================================== 00:17:53.352 Latency(us) 00:17:53.352 Device Information : IOPS MiB/s Average min max 00:17:53.352 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8726.27 34.09 3667.29 718.09 9382.63 00:17:53.352 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3950.24 15.43 8136.42 6454.36 16725.74 00:17:53.352 ======================================================== 00:17:53.352 Total : 12676.50 49.52 5059.95 718.09 16725.74 00:17:53.352 00:17:53.352 17:18:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:53.352 17:18:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:55.897 Initializing NVMe Controllers 00:17:55.897 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.897 Controller IO queue size 128, less than required. 00:17:55.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:55.897 Controller IO queue size 128, less than required. 00:17:55.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:55.897 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:55.897 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:55.897 Initialization complete. Launching workers. 
00:17:55.897 ======================================================== 00:17:55.897 Latency(us) 00:17:55.897 Device Information : IOPS MiB/s Average min max 00:17:55.897 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1764.35 441.09 73387.47 36893.66 104807.09 00:17:55.897 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 669.94 167.49 200104.08 77217.77 318308.29 00:17:55.897 ======================================================== 00:17:55.897 Total : 2434.29 608.57 108261.21 36893.66 318308.29 00:17:55.897 00:17:55.897 17:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:56.156 Initializing NVMe Controllers 00:17:56.156 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.156 Controller IO queue size 128, less than required. 00:17:56.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:56.156 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:56.156 Controller IO queue size 128, less than required. 00:17:56.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:56.156 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:56.156 WARNING: Some requested NVMe devices were skipped 00:17:56.156 No valid NVMe controllers or AIO or URING devices found 00:17:56.156 17:18:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:58.694 Initializing NVMe Controllers 00:17:58.694 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:58.694 Controller IO queue size 128, less than required. 00:17:58.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:58.694 Controller IO queue size 128, less than required. 00:17:58.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:58.694 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:58.694 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:58.694 Initialization complete. Launching workers. 
00:17:58.694 00:17:58.694 ==================== 00:17:58.694 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:58.694 TCP transport: 00:17:58.694 polls: 8425 00:17:58.694 idle_polls: 4657 00:17:58.694 sock_completions: 3768 00:17:58.694 nvme_completions: 6079 00:17:58.694 submitted_requests: 9022 00:17:58.694 queued_requests: 1 00:17:58.694 00:17:58.694 ==================== 00:17:58.694 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:58.694 TCP transport: 00:17:58.694 polls: 8662 00:17:58.694 idle_polls: 4781 00:17:58.694 sock_completions: 3881 00:17:58.694 nvme_completions: 6705 00:17:58.694 submitted_requests: 10112 00:17:58.694 queued_requests: 1 00:17:58.694 ======================================================== 00:17:58.694 Latency(us) 00:17:58.694 Device Information : IOPS MiB/s Average min max 00:17:58.694 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1519.46 379.86 86629.30 41917.17 147573.57 00:17:58.694 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1675.95 418.99 76708.31 27924.76 107286.29 00:17:58.694 ======================================================== 00:17:58.694 Total : 3195.41 798.85 81425.87 27924.76 147573.57 00:17:58.694 00:17:58.694 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:58.957 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.261 rmmod nvme_tcp 00:17:59.261 rmmod nvme_fabrics 00:17:59.261 rmmod nvme_keyring 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74343 ']' 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74343 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 74343 ']' 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 74343 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74343 00:17:59.261 killing process with pid 74343 00:17:59.261 17:18:59 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74343' 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 74343 00:17:59.261 17:18:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 74343 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.196 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:00.197 ************************************ 00:18:00.197 END TEST nvmf_perf 00:18:00.197 ************************************ 
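The nvmf_perf trace above reduces to a short recipe: build a veth/bridge test network, run nvmf_tgt inside a network namespace, expose one TCP subsystem, and drive it with spdk_nvme_perf. A minimal sketch follows, assuming the interface names, addresses, NQN and repo path that appear in this run's trace (the real scripts add more interfaces, error handling and waitforlisten on the RPC socket):

#!/usr/bin/env bash
# Sketch of the setup exercised by host/perf.sh in the log above.
# Names/paths are taken from the trace; this is an illustration, not the test script itself.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk      # repo path used by this job
NS=nvmf_tgt_ns_spdk

# veth pairs: initiator side stays in the default namespace, target side moves into $NS;
# both peers are enslaved to a bridge so 10.0.0.1 (initiator) can reach 10.0.0.3 (target).
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                     # connectivity check, as in the log

# Target application plus one subsystem, mirroring the rpc.py calls in the trace.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -m 0xF &
sleep 2                                 # the real harness uses waitforlisten on /var/tmp/spdk.sock
rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_malloc_create 64 512          # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# One of the perf invocations from the log: 32-deep, 4 KiB, 50/50 random read/write, 1 second.
"$SPDK/build/bin/spdk_nvme_perf" -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

In the resulting latency tables, the Total row is IOPS-weighted across namespaces, which is why a few slow 256 KiB-class completions on one namespace pull the combined average well above the fast namespace's average.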
00:18:00.197 00:18:00.197 real 0m14.356s 00:18:00.197 user 0m51.595s 00:18:00.197 sys 0m4.145s 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.197 ************************************ 00:18:00.197 START TEST nvmf_fio_host 00:18:00.197 ************************************ 00:18:00.197 17:19:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:00.456 * Looking for test storage... 00:18:00.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:00.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.456 --rc genhtml_branch_coverage=1 00:18:00.456 --rc genhtml_function_coverage=1 00:18:00.456 --rc genhtml_legend=1 00:18:00.456 --rc geninfo_all_blocks=1 00:18:00.456 --rc geninfo_unexecuted_blocks=1 00:18:00.456 00:18:00.456 ' 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:00.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.456 --rc genhtml_branch_coverage=1 00:18:00.456 --rc genhtml_function_coverage=1 00:18:00.456 --rc genhtml_legend=1 00:18:00.456 --rc geninfo_all_blocks=1 00:18:00.456 --rc geninfo_unexecuted_blocks=1 00:18:00.456 00:18:00.456 ' 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:00.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.456 --rc genhtml_branch_coverage=1 00:18:00.456 --rc genhtml_function_coverage=1 00:18:00.456 --rc genhtml_legend=1 00:18:00.456 --rc geninfo_all_blocks=1 00:18:00.456 --rc geninfo_unexecuted_blocks=1 00:18:00.456 00:18:00.456 ' 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:00.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.456 --rc genhtml_branch_coverage=1 00:18:00.456 --rc genhtml_function_coverage=1 00:18:00.456 --rc genhtml_legend=1 00:18:00.456 --rc geninfo_all_blocks=1 00:18:00.456 --rc geninfo_unexecuted_blocks=1 00:18:00.456 00:18:00.456 ' 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.456 17:19:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.456 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.457 17:19:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.457 Cannot find device "nvmf_init_br" 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.457 Cannot find device "nvmf_init_br2" 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:00.457 Cannot find device "nvmf_tgt_br" 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:00.457 Cannot find device "nvmf_tgt_br2" 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.457 Cannot find device "nvmf_init_br" 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.457 Cannot find device "nvmf_init_br2" 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:00.457 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.716 Cannot find device "nvmf_tgt_br" 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.716 Cannot find device "nvmf_tgt_br2" 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.716 Cannot find device "nvmf_br" 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.716 Cannot find device "nvmf_init_if" 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:00.716 Cannot find device "nvmf_init_if2" 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.716 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:00.976 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:00.976 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:18:00.976 00:18:00.976 --- 10.0.0.3 ping statistics --- 00:18:00.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.976 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:00.976 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:00.976 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:18:00.976 00:18:00.976 --- 10.0.0.4 ping statistics --- 00:18:00.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.976 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:00.976 00:18:00.976 --- 10.0.0.1 ping statistics --- 00:18:00.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.976 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:00.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:18:00.976 00:18:00.976 --- 10.0.0.2 ping statistics --- 00:18:00.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.976 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74796 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74796 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 74796 ']' 00:18:00.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.976 17:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.976 [2024-11-04 17:19:01.647584] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:18:00.976 [2024-11-04 17:19:01.647694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.236 [2024-11-04 17:19:01.798891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:01.236 [2024-11-04 17:19:01.861491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.236 [2024-11-04 17:19:01.861535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.236 [2024-11-04 17:19:01.861561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.236 [2024-11-04 17:19:01.861578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.236 [2024-11-04 17:19:01.861603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
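(Editor's note.) The nvmf_veth_init sequence traced above (nvmf/common.sh@145 through @225) builds the virtual topology this test runs on. The following is only a condensed sketch of the commands already shown in the traces, not an additional script; interface and address values are taken verbatim from this run.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator side, 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side,   10.0.0.3/24
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target side,   10.0.0.4/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live inside the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                                # bridge ties the four *_br peers together
    ip link set nvmf_init_br master nvmf_br                        # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
    # iptables ACCEPT rules for TCP/4420 plus the four pings (10.0.0.1-10.0.0.4) then verify the path end to end.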
00:18:01.236 [2024-11-04 17:19:01.863427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.236 [2024-11-04 17:19:01.863555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.236 [2024-11-04 17:19:01.863819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.236 [2024-11-04 17:19:01.863872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.236 [2024-11-04 17:19:01.922925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.172 17:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:02.172 17:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:18:02.172 17:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:02.172 [2024-11-04 17:19:02.906525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.172 17:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:02.172 17:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.172 17:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.172 17:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:02.740 Malloc1 00:18:02.740 17:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:02.999 17:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:03.259 17:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:03.517 [2024-11-04 17:19:04.077718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.517 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:03.776 17:19:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:03.776 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:03.776 fio-3.35 00:18:03.776 Starting 1 thread 00:18:06.328 00:18:06.328 test: (groupid=0, jobs=1): err= 0: pid=74875: Mon Nov 4 17:19:06 2024 00:18:06.328 read: IOPS=8923, BW=34.9MiB/s (36.5MB/s)(70.0MiB/2007msec) 00:18:06.328 slat (usec): min=2, max=327, avg= 2.62, stdev= 3.39 00:18:06.328 clat (usec): min=2561, max=13277, avg=7443.44, stdev=507.41 00:18:06.328 lat (usec): min=2617, max=13279, avg=7446.06, stdev=507.20 00:18:06.328 clat percentiles (usec): 00:18:06.328 | 1.00th=[ 6390], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 7046], 00:18:06.328 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:18:06.328 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8160], 00:18:06.328 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[11731], 99.95th=[12518], 00:18:06.328 | 99.99th=[13173] 00:18:06.328 bw ( KiB/s): min=34408, max=36536, per=99.99%, avg=35690.00, stdev=910.14, samples=4 00:18:06.328 iops : min= 8602, max= 9134, avg=8922.50, stdev=227.53, samples=4 00:18:06.328 write: IOPS=8940, BW=34.9MiB/s (36.6MB/s)(70.1MiB/2007msec); 0 zone resets 00:18:06.328 slat (usec): min=2, max=259, avg= 2.75, stdev= 2.60 00:18:06.328 clat (usec): min=2409, max=12697, avg=6819.21, stdev=459.00 00:18:06.328 lat (usec): min=2424, max=12700, avg=6821.96, stdev=458.89 00:18:06.328 clat percentiles (usec): 
00:18:06.328 | 1.00th=[ 5866], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6456], 00:18:06.328 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6915], 00:18:06.328 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7308], 95.00th=[ 7504], 00:18:06.328 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[10814], 99.95th=[11863], 00:18:06.328 | 99.99th=[12649] 00:18:06.328 bw ( KiB/s): min=35160, max=36152, per=100.00%, avg=35760.00, stdev=424.88, samples=4 00:18:06.328 iops : min= 8790, max= 9038, avg=8940.00, stdev=106.22, samples=4 00:18:06.328 lat (msec) : 4=0.08%, 10=99.77%, 20=0.15% 00:18:06.328 cpu : usr=67.40%, sys=24.28%, ctx=10, majf=0, minf=7 00:18:06.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:06.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.328 issued rwts: total=17909,17943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.328 00:18:06.328 Run status group 0 (all jobs): 00:18:06.328 READ: bw=34.9MiB/s (36.5MB/s), 34.9MiB/s-34.9MiB/s (36.5MB/s-36.5MB/s), io=70.0MiB (73.4MB), run=2007-2007msec 00:18:06.328 WRITE: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.1MiB (73.5MB), run=2007-2007msec 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:06.329 17:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:06.329 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:06.329 fio-3.35 00:18:06.329 Starting 1 thread 00:18:08.863 00:18:08.863 test: (groupid=0, jobs=1): err= 0: pid=74918: Mon Nov 4 17:19:09 2024 00:18:08.863 read: IOPS=7326, BW=114MiB/s (120MB/s)(230MiB/2008msec) 00:18:08.863 slat (usec): min=2, max=120, avg= 3.74, stdev= 2.77 00:18:08.863 clat (usec): min=1443, max=22061, avg=9976.38, stdev=2972.08 00:18:08.863 lat (usec): min=1447, max=22064, avg=9980.13, stdev=2972.14 00:18:08.863 clat percentiles (usec): 00:18:08.863 | 1.00th=[ 4359], 5.00th=[ 5276], 10.00th=[ 6128], 20.00th=[ 7439], 00:18:08.863 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10552], 00:18:08.863 | 70.00th=[11338], 80.00th=[12387], 90.00th=[13829], 95.00th=[15008], 00:18:08.863 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20317], 99.95th=[21365], 00:18:08.863 | 99.99th=[21890] 00:18:08.863 bw ( KiB/s): min=58048, max=61376, per=51.03%, avg=59816.00, stdev=1456.00, samples=4 00:18:08.863 iops : min= 3628, max= 3836, avg=3738.50, stdev=91.00, samples=4 00:18:08.863 write: IOPS=4147, BW=64.8MiB/s (67.9MB/s)(122MiB/1884msec); 0 zone resets 00:18:08.863 slat (usec): min=31, max=245, avg=38.14, stdev= 9.20 00:18:08.863 clat (usec): min=1246, max=25800, avg=13220.73, stdev=2889.67 00:18:08.863 lat (usec): min=1279, max=25835, avg=13258.87, stdev=2889.76 00:18:08.863 clat percentiles (usec): 00:18:08.863 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10683], 00:18:08.863 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12911], 60.00th=[13698], 00:18:08.863 | 70.00th=[14615], 80.00th=[15795], 90.00th=[17171], 95.00th=[18482], 00:18:08.863 | 99.00th=[20579], 99.50th=[22414], 99.90th=[24773], 99.95th=[25035], 00:18:08.863 | 99.99th=[25822] 00:18:08.863 bw ( KiB/s): min=60640, max=64448, per=93.48%, avg=62024.00, stdev=1669.81, samples=4 00:18:08.863 iops : min= 3790, max= 4028, avg=3876.50, stdev=104.36, samples=4 00:18:08.863 lat (msec) : 2=0.03%, 4=0.24%, 10=38.18%, 20=60.79%, 50=0.77% 00:18:08.863 cpu : usr=82.32%, sys=13.79%, ctx=50, majf=0, minf=6 00:18:08.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:08.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.863 issued rwts: total=14712,7813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.863 00:18:08.863 Run status group 0 (all jobs): 00:18:08.863 READ: bw=114MiB/s (120MB/s), 
114MiB/s-114MiB/s (120MB/s-120MB/s), io=230MiB (241MB), run=2008-2008msec 00:18:08.863 WRITE: bw=64.8MiB/s (67.9MB/s), 64.8MiB/s-64.8MiB/s (67.9MB/s-67.9MB/s), io=122MiB (128MB), run=1884-1884msec 00:18:08.863 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.122 rmmod nvme_tcp 00:18:09.122 rmmod nvme_fabrics 00:18:09.122 rmmod nvme_keyring 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74796 ']' 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74796 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 74796 ']' 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 74796 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74796 00:18:09.122 killing process with pid 74796 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74796' 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 74796 00:18:09.122 17:19:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 74796 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:09.382 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:09.661 ************************************ 00:18:09.661 END TEST nvmf_fio_host 00:18:09.661 ************************************ 00:18:09.661 00:18:09.661 real 0m9.455s 00:18:09.661 user 0m37.616s 00:18:09.661 sys 0m2.529s 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:09.661 17:19:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:09.662 17:19:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.931 ************************************ 00:18:09.931 START TEST nvmf_failover 00:18:09.931 
************************************ 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:09.931 * Looking for test storage... 00:18:09.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.931 --rc genhtml_branch_coverage=1 00:18:09.931 --rc genhtml_function_coverage=1 00:18:09.931 --rc genhtml_legend=1 00:18:09.931 --rc geninfo_all_blocks=1 00:18:09.931 --rc geninfo_unexecuted_blocks=1 00:18:09.931 00:18:09.931 ' 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.931 --rc genhtml_branch_coverage=1 00:18:09.931 --rc genhtml_function_coverage=1 00:18:09.931 --rc genhtml_legend=1 00:18:09.931 --rc geninfo_all_blocks=1 00:18:09.931 --rc geninfo_unexecuted_blocks=1 00:18:09.931 00:18:09.931 ' 00:18:09.931 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.932 --rc genhtml_branch_coverage=1 00:18:09.932 --rc genhtml_function_coverage=1 00:18:09.932 --rc genhtml_legend=1 00:18:09.932 --rc geninfo_all_blocks=1 00:18:09.932 --rc geninfo_unexecuted_blocks=1 00:18:09.932 00:18:09.932 ' 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:09.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.932 --rc genhtml_branch_coverage=1 00:18:09.932 --rc genhtml_function_coverage=1 00:18:09.932 --rc genhtml_legend=1 00:18:09.932 --rc geninfo_all_blocks=1 00:18:09.932 --rc geninfo_unexecuted_blocks=1 00:18:09.932 00:18:09.932 ' 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.932 
17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.932 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
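(Editor's note.) nvmftestinit is starting again here for the failover test. How the target command line gets assembled can be pieced together from the traces (nvmf/common.sh@29, @156, @227 and @508): the app arguments are built up in the NVMF_APP array and then prefixed with the netns wrapper. A rough reconstruction, with the binary path copied from this run and array contents otherwise assumed:

    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")            # common.sh@156
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)             # base command (path from this run)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                            # common.sh@29: shm id + tracepoint mask
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")                 # common.sh@227: prepend the netns wrapper
    "${NVMF_APP[@]}" -m 0xE &                                              # nvmfappstart adds the core mask (common.sh@508)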
00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.932 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:09.933 Cannot find device "nvmf_init_br" 00:18:09.933 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:09.933 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:09.933 Cannot find device "nvmf_init_br2" 00:18:09.933 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:09.933 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:09.933 Cannot find device "nvmf_tgt_br" 00:18:09.933 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:09.933 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.933 Cannot find device "nvmf_tgt_br2" 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:10.192 Cannot find device "nvmf_init_br" 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:10.192 Cannot find device "nvmf_init_br2" 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:10.192 Cannot find device "nvmf_tgt_br" 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:10.192 Cannot find device "nvmf_tgt_br2" 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:10.192 Cannot find device "nvmf_br" 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:10.192 Cannot find device "nvmf_init_if" 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:10.192 Cannot find device "nvmf_init_if2" 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.192 
17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.192 17:19:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:10.451 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.451 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:18:10.451 00:18:10.451 --- 10.0.0.3 ping statistics --- 00:18:10.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.451 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:10.451 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:10.451 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:18:10.451 00:18:10.451 --- 10.0.0.4 ping statistics --- 00:18:10.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.451 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:10.451 00:18:10.451 --- 10.0.0.1 ping statistics --- 00:18:10.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.451 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:10.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:18:10.451 00:18:10.451 --- 10.0.0.2 ping statistics --- 00:18:10.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.451 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:10.451 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:10.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
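(Editor's note.) The iptables rules added just above (nvmf/common.sh@217-@219, expanded at @790) are tagged with an SPDK_NVMF comment, which is what lets teardown (iptr at nvmf/common.sh@297/@791, seen at the end of the previous test) strip only the rules the harness itself added. A plausible shape for those helpers, inferred from the expanded commands in the traces:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }     # add a rule, tagged with its own arguments
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }  # restore everything except tagged rules

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT     # as at common.sh@217
    iptr                                                              # teardown removes only the SPDK_NVMF-tagged rules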
00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75194 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75194 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75194 ']' 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:10.452 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:10.452 [2024-11-04 17:19:11.124900] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:18:10.452 [2024-11-04 17:19:11.125177] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.711 [2024-11-04 17:19:11.277850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:10.711 [2024-11-04 17:19:11.344482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.711 [2024-11-04 17:19:11.344837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.711 [2024-11-04 17:19:11.344873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.711 [2024-11-04 17:19:11.344884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.711 [2024-11-04 17:19:11.344894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
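At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE, pid 75194) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock is answering. The loop below is only a minimal stand-in for that helper, not the actual autotest_common.sh implementation; using rpc_get_methods as the liveness probe is an editorial choice.

  # Minimal stand-in for waitforlisten: poll until the target's RPC socket answers,
  # bailing out if the process dies first.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  pid=75194
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break            # RPC server is up; the test can start issuing configuration calls
      fi
      sleep 0.1
  done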
00:18:10.711 [2024-11-04 17:19:11.346296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.711 [2024-11-04 17:19:11.346446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.711 [2024-11-04 17:19:11.346452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.711 [2024-11-04 17:19:11.405136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.711 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:10.711 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:18:10.711 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.711 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.711 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:10.970 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.970 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:11.228 [2024-11-04 17:19:11.812196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.228 17:19:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:11.487 Malloc0 00:18:11.487 17:19:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:11.746 17:19:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.004 17:19:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:12.263 [2024-11-04 17:19:12.943960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:12.263 17:19:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:12.521 [2024-11-04 17:19:13.184165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:12.521 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:12.780 [2024-11-04 17:19:13.428387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:12.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
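With the target up, host/failover.sh provisions it over RPC: a TCP transport, a Malloc0 ram-disk bdev, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001) with Malloc0 as its namespace, and three TCP listeners on 10.0.0.3 ports 4420/4421/4422 that the test will later tear down one at a time. The commands below are gathered verbatim from the lines above; the $rpc/$NQN variables and the port loop are just shorthand.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns "$NQN" Malloc0
  # Three listeners on the same target IP; removing them one at a time is what forces
  # the host to fail over between ports during the I/O run.
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s "$port"
  done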
00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75244 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75244 /var/tmp/bdevperf.sock 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75244 ']' 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:12.780 17:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:13.717 17:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:13.717 17:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:18:13.717 17:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:13.976 NVMe0n1 00:18:13.976 17:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:14.543 00:18:14.543 17:19:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75268 00:18:14.543 17:19:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.543 17:19:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:15.481 17:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:15.740 17:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:19.026 17:19:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:19.026 00:18:19.026 17:19:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:19.596 17:19:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:22.919 17:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:22.919 [2024-11-04 17:19:23.383346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:22.920 17:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:23.856 17:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:24.114 17:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75268 00:18:30.686 { 00:18:30.686 "results": [ 00:18:30.686 { 00:18:30.686 "job": "NVMe0n1", 00:18:30.686 "core_mask": "0x1", 00:18:30.686 "workload": "verify", 00:18:30.686 "status": "finished", 00:18:30.686 "verify_range": { 00:18:30.686 "start": 0, 00:18:30.686 "length": 16384 00:18:30.686 }, 00:18:30.686 "queue_depth": 128, 00:18:30.686 "io_size": 4096, 00:18:30.686 "runtime": 15.011003, 00:18:30.686 "iops": 9422.08858395405, 00:18:30.686 "mibps": 36.80503353107051, 00:18:30.686 "io_failed": 3413, 00:18:30.686 "io_timeout": 0, 00:18:30.686 "avg_latency_us": 13233.434545253709, 00:18:30.686 "min_latency_us": 633.0181818181818, 00:18:30.686 "max_latency_us": 17515.985454545455 00:18:30.686 } 00:18:30.686 ], 00:18:30.686 "core_count": 1 00:18:30.686 } 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75244 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75244 ']' 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75244 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75244 00:18:30.686 killing process with pid 75244 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75244' 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75244 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75244 00:18:30.686 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:30.686 [2024-11-04 17:19:13.505364] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
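The host-side sequence above: bdevperf is started in wait-for-RPC mode on /var/tmp/bdevperf.sock (pid 75244), controller NVMe0 is attached to the subsystem over two paths (4420 and 4421) with -x failover so the extra path is held as an alternate, a 15-second queue-depth-128, 4 KiB verify workload is kicked off through bdevperf.py, and while it runs the script removes and re-adds listeners so I/O keeps being rerouted. The JSON result reports roughly 9.4k IOPS with 3413 failed I/Os; those are the commands caught in flight on a path at the moment its listener disappeared, and the try.txt dump that follows shows them being aborted. The sketch below condenses that sequence; backgrounding perform_tests with & and waiting on $test_pid is a simplification of how the script tracks run_test_pid, and the bdevperf launch line is shown only as a comment.

  # bdevperf was launched earlier (in the background, wait-for-RPC mode) roughly as:
  #   build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_rpc="$rpc -s /var/tmp/bdevperf.sock"
  NQN=nqn.2016-06.io.spdk:cnode1
  # Attach the same subsystem over two paths; -x failover keeps the second path as an alternate.
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$NQN" -x failover
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n "$NQN" -x failover
  # Start the 15 s verify workload, then shuffle listeners underneath it.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  test_pid=$!
  $rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420; sleep 3
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n "$NQN" -x failover
  $rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421; sleep 3
  $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420; sleep 1
  $rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422
  wait "$test_pid"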
00:18:30.686 [2024-11-04 17:19:13.505469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75244 ] 00:18:30.686 [2024-11-04 17:19:13.659424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.686 [2024-11-04 17:19:13.744128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.686 [2024-11-04 17:19:13.824064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.686 Running I/O for 15 seconds... 00:18:30.686 7904.00 IOPS, 30.88 MiB/s [2024-11-04T17:19:31.490Z] [2024-11-04 17:19:16.396836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.686 [2024-11-04 17:19:16.396934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.686 [2024-11-04 17:19:16.396969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.686 [2024-11-04 17:19:16.396988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:30.687 [2024-11-04 17:19:16.397241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397711] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.687 [2024-11-04 17:19:16.397767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.397805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.397843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.397880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.397929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.397981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398144] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.687 [2024-11-04 17:19:16.398390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.687 [2024-11-04 17:19:16.398408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.398453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.398942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.398960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.688 [2024-11-04 17:19:16.398991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399367] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399762] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.688 [2024-11-04 17:19:16.399868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.688 [2024-11-04 17:19:16.399887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.688 [2024-11-04 17:19:16.399904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.399955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.399981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:30.689 [2024-11-04 17:19:16.400602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.400846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.689 [2024-11-04 17:19:16.400882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.689 [2024-11-04 17:19:16.400941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.400960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.689 [2024-11-04 17:19:16.400976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401003] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.689 [2024-11-04 17:19:16.401021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.689 [2024-11-04 17:19:16.401066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.689 [2024-11-04 17:19:16.401100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.689 [2024-11-04 17:19:16.401134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.689 [2024-11-04 17:19:16.401170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.401204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.401238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.401286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.401328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.401363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.689 [2024-11-04 17:19:16.401382] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.689 [2024-11-04 17:19:16.401398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.690 [2024-11-04 17:19:16.401773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24a9fc0 is same with the state(6) to be set 00:18:30.690 [2024-11-04 17:19:16.401814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.401828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.401842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75896 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.401860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.401893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.401923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76352 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.401957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.401980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.401993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.402005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.402032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.402063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.402075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76368 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.402091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.402121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.402134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76376 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.402155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.402185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.402197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76384 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.402214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:30.690 [2024-11-04 17:19:16.402230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.402242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.402267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.402285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.402315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.402327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.402343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.690 [2024-11-04 17:19:16.402372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.690 [2024-11-04 17:19:16.402384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:18:30.690 [2024-11-04 17:19:16.402400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402477] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:30.690 [2024-11-04 17:19:16.402585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.690 [2024-11-04 17:19:16.402612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.690 [2024-11-04 17:19:16.402655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.690 [2024-11-04 17:19:16.402707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.690 [2024-11-04 17:19:16.402742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:16.402759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:18:30.690 [2024-11-04 17:19:16.402815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240d710 (9): Bad file descriptor 00:18:30.690 [2024-11-04 17:19:16.406417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:30.690 [2024-11-04 17:19:16.431954] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:18:30.690 8363.50 IOPS, 32.67 MiB/s [2024-11-04T17:19:31.494Z] 8743.67 IOPS, 34.15 MiB/s [2024-11-04T17:19:31.494Z] 8991.75 IOPS, 35.12 MiB/s [2024-11-04T17:19:31.494Z] [2024-11-04 17:19:20.083296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.690 [2024-11-04 17:19:20.083414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:20.083452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.690 [2024-11-04 17:19:20.083470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:20.083489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.690 [2024-11-04 17:19:20.083505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:20.083524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.690 [2024-11-04 17:19:20.083540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:20.083558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.690 [2024-11-04 17:19:20.083575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.690 [2024-11-04 17:19:20.083592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.690 [2024-11-04 17:19:20.083609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.083975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.083991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.691 [2024-11-04 17:19:20.084471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.691 [2024-11-04 17:19:20.084509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.691 [2024-11-04 17:19:20.084545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.691 [2024-11-04 17:19:20.084580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.691 [2024-11-04 
17:19:20.084614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.691 [2024-11-04 17:19:20.084649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.691 [2024-11-04 17:19:20.084683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.691 [2024-11-04 17:19:20.084718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.691 [2024-11-04 17:19:20.084958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.691 [2024-11-04 17:19:20.084975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.084991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.085746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.085944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.085978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.086028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.086062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.692 [2024-11-04 17:19:20.086110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 
17:19:20.086335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.692 [2024-11-04 17:19:20.086617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.692 [2024-11-04 17:19:20.086634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.086671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.086723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.086759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.086794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.086831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.086882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.086917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.086960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.086979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.086995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.087028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.087064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:33 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99648 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.693 [2024-11-04 17:19:20.087674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.087724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.087759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.087794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.087830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.087864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 
[2024-11-04 17:19:20.087906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.693 [2024-11-04 17:19:20.087941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.087959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aa9e0 is same with the state(6) to be set 00:18:30.693 [2024-11-04 17:19:20.087981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.693 [2024-11-04 17:19:20.087994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.693 [2024-11-04 17:19:20.088007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99104 len:8 PRP1 0x0 PRP2 0x0 00:18:30.693 [2024-11-04 17:19:20.088023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.088040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.693 [2024-11-04 17:19:20.088053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.693 [2024-11-04 17:19:20.088066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99688 len:8 PRP1 0x0 PRP2 0x0 00:18:30.693 [2024-11-04 17:19:20.088086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.088102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.693 [2024-11-04 17:19:20.088115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.693 [2024-11-04 17:19:20.088128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99696 len:8 PRP1 0x0 PRP2 0x0 00:18:30.693 [2024-11-04 17:19:20.088144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.693 [2024-11-04 17:19:20.088159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99704 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99712 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99720 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99728 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99736 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99744 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99752 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99760 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 
[2024-11-04 17:19:20.088704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99768 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99776 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99784 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.088905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.088930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.088979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.088994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99792 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.089011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.089029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.089042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.089073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99800 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.089091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.089109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.694 [2024-11-04 17:19:20.089123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.694 [2024-11-04 17:19:20.089140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99808 len:8 PRP1 0x0 PRP2 0x0 00:18:30.694 [2024-11-04 17:19:20.089168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.089254] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:30.694 [2024-11-04 17:19:20.089358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.694 [2024-11-04 17:19:20.089389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.089410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.694 [2024-11-04 17:19:20.089428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.089447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.694 [2024-11-04 17:19:20.089479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.089498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.694 [2024-11-04 17:19:20.089516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:20.089533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:30.694 [2024-11-04 17:19:20.089641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240d710 (9): Bad file descriptor 00:18:30.694 [2024-11-04 17:19:20.093699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:30.694 [2024-11-04 17:19:20.129147] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:18:30.694 9013.60 IOPS, 35.21 MiB/s [2024-11-04T17:19:31.498Z] 9154.00 IOPS, 35.76 MiB/s [2024-11-04T17:19:31.498Z] 9263.43 IOPS, 36.19 MiB/s [2024-11-04T17:19:31.498Z] 9329.50 IOPS, 36.44 MiB/s [2024-11-04T17:19:31.498Z] 9392.00 IOPS, 36.69 MiB/s [2024-11-04T17:19:31.498Z] [2024-11-04 17:19:24.678308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:24.678456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:24.678589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:24.678626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:24.678662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:24.678698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:24.678735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:24.678771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.694 [2024-11-04 17:19:24.678824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.694 [2024-11-04 17:19:24.678841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.678860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74328 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.678877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.678895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.678913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.678932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.678949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.678967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.678983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:30.695 [2024-11-04 17:19:24.679261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 
17:19:24.679667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.679740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.679977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.679994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.680029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.695 [2024-11-04 17:19:24.680073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.680128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.680165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.680201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.680237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.680278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.680335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.695 [2024-11-04 17:19:24.680371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.695 [2024-11-04 17:19:24.680390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.680967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.680983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.696 [2024-11-04 17:19:24.681133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.696 [2024-11-04 17:19:24.681171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.696 [2024-11-04 17:19:24.681207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.696 [2024-11-04 17:19:24.681260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.696 [2024-11-04 17:19:24.681307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.696 [2024-11-04 17:19:24.681348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 
[2024-11-04 17:19:24.681366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.696 [2024-11-04 17:19:24.681384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.696 [2024-11-04 17:19:24.681420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.696 [2024-11-04 17:19:24.681818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.696 [2024-11-04 17:19:24.681835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.681855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.681872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.681907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.681925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.681958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.681975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.681993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:40 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74608 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.697 [2024-11-04 17:19:24.682697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 [2024-11-04 17:19:24.682905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.697 
[2024-11-04 17:19:24.682939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.682957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24aa6a0 is same with the state(6) to be set 00:18:30.697 [2024-11-04 17:19:24.682977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.697 [2024-11-04 17:19:24.682991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.697 [2024-11-04 17:19:24.683005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74248 len:8 PRP1 0x0 PRP2 0x0 00:18:30.697 [2024-11-04 17:19:24.683028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.683047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.697 [2024-11-04 17:19:24.683060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.697 [2024-11-04 17:19:24.683073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74640 len:8 PRP1 0x0 PRP2 0x0 00:18:30.697 [2024-11-04 17:19:24.683089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.683114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.697 [2024-11-04 17:19:24.683128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.697 [2024-11-04 17:19:24.683141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74648 len:8 PRP1 0x0 PRP2 0x0 00:18:30.697 [2024-11-04 17:19:24.683156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.683173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.697 [2024-11-04 17:19:24.683186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.697 [2024-11-04 17:19:24.683199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74656 len:8 PRP1 0x0 PRP2 0x0 00:18:30.697 [2024-11-04 17:19:24.683230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.683250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.697 [2024-11-04 17:19:24.683263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.697 [2024-11-04 17:19:24.683277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74664 len:8 PRP1 0x0 PRP2 0x0 00:18:30.697 [2024-11-04 17:19:24.683292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.683309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.697 [2024-11-04 17:19:24.683321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.697 [2024-11-04 17:19:24.683334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74672 len:8 PRP1 0x0 PRP2 0x0 00:18:30.697 [2024-11-04 17:19:24.683350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.697 [2024-11-04 17:19:24.683378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74680 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74688 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74696 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74704 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74712 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:74720 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74728 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74736 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.683929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74744 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.683961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.683978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.683990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.684002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74752 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.684018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.684034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:30.698 [2024-11-04 17:19:24.684047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:30.698 [2024-11-04 17:19:24.684059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74760 len:8 PRP1 0x0 PRP2 0x0 00:18:30.698 [2024-11-04 17:19:24.684080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.684160] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:30.698 [2024-11-04 17:19:24.684261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.698 [2024-11-04 17:19:24.684300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.684320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.698 [2024-11-04 17:19:24.684337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.684353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.698 [2024-11-04 17:19:24.684369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.684387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.698 [2024-11-04 17:19:24.684403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.698 [2024-11-04 17:19:24.684419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:30.698 [2024-11-04 17:19:24.684467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240d710 (9): Bad file descriptor 00:18:30.698 [2024-11-04 17:19:24.688332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:30.698 [2024-11-04 17:19:24.725904] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:18:30.698 9368.30 IOPS, 36.59 MiB/s [2024-11-04T17:19:31.502Z] 9376.00 IOPS, 36.62 MiB/s [2024-11-04T17:19:31.502Z] 9385.58 IOPS, 36.66 MiB/s [2024-11-04T17:19:31.502Z] 9402.69 IOPS, 36.73 MiB/s [2024-11-04T17:19:31.502Z] 9422.93 IOPS, 36.81 MiB/s [2024-11-04T17:19:31.502Z] 9422.60 IOPS, 36.81 MiB/s 00:18:30.698 Latency(us) 00:18:30.698 [2024-11-04T17:19:31.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.698 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.698 Verification LBA range: start 0x0 length 0x4000 00:18:30.698 NVMe0n1 : 15.01 9422.09 36.81 227.37 0.00 13233.43 633.02 17515.99 00:18:30.698 [2024-11-04T17:19:31.502Z] =================================================================================================================== 00:18:30.698 [2024-11-04T17:19:31.502Z] Total : 9422.09 36.81 227.37 0.00 13233.43 633.02 17515.99 00:18:30.698 Received shutdown signal, test time was about 15.000000 seconds 00:18:30.698 00:18:30.698 Latency(us) 00:18:30.698 [2024-11-04T17:19:31.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.698 [2024-11-04T17:19:31.502Z] =================================================================================================================== 00:18:30.698 [2024-11-04T17:19:31.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # 
bdevperf_pid=75442 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75442 /var/tmp/bdevperf.sock 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75442 ']' 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:30.698 17:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:30.958 17:19:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:30.958 17:19:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:18:30.958 17:19:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:31.216 [2024-11-04 17:19:31.859819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:31.216 17:19:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:31.475 [2024-11-04 17:19:32.152062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:31.475 17:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:31.733 NVMe0n1 00:18:31.733 17:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:31.992 00:18:32.251 17:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:32.509 00:18:32.509 17:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:32.509 17:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:32.768 17:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:33.027 17:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 
-- # sleep 3 00:18:36.339 17:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:36.339 17:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:36.339 17:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75524 00:18:36.339 17:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:36.339 17:19:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75524 00:18:37.714 { 00:18:37.714 "results": [ 00:18:37.714 { 00:18:37.714 "job": "NVMe0n1", 00:18:37.714 "core_mask": "0x1", 00:18:37.714 "workload": "verify", 00:18:37.714 "status": "finished", 00:18:37.714 "verify_range": { 00:18:37.714 "start": 0, 00:18:37.714 "length": 16384 00:18:37.714 }, 00:18:37.714 "queue_depth": 128, 00:18:37.714 "io_size": 4096, 00:18:37.714 "runtime": 1.010829, 00:18:37.714 "iops": 7001.184176552117, 00:18:37.714 "mibps": 27.348375689656706, 00:18:37.714 "io_failed": 0, 00:18:37.714 "io_timeout": 0, 00:18:37.714 "avg_latency_us": 18210.426536154253, 00:18:37.714 "min_latency_us": 1906.5018181818182, 00:18:37.714 "max_latency_us": 18588.392727272727 00:18:37.714 } 00:18:37.714 ], 00:18:37.714 "core_count": 1 00:18:37.714 } 00:18:37.714 17:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:37.714 [2024-11-04 17:19:30.559456] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:18:37.714 [2024-11-04 17:19:30.560337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75442 ] 00:18:37.714 [2024-11-04 17:19:30.702884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.714 [2024-11-04 17:19:30.761616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.714 [2024-11-04 17:19:30.816977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.714 [2024-11-04 17:19:33.679621] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:37.714 [2024-11-04 17:19:33.679744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.714 [2024-11-04 17:19:33.679770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.714 [2024-11-04 17:19:33.679790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.714 [2024-11-04 17:19:33.679806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.714 [2024-11-04 17:19:33.679820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.714 [2024-11-04 17:19:33.679834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:37.714 [2024-11-04 17:19:33.679849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.714 [2024-11-04 17:19:33.679863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.714 [2024-11-04 17:19:33.679878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:37.714 [2024-11-04 17:19:33.679944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:37.714 [2024-11-04 17:19:33.680000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d3710 (9): Bad file descriptor 00:18:37.714 [2024-11-04 17:19:33.682651] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:37.714 Running I/O for 1 seconds... 00:18:37.714 6949.00 IOPS, 27.14 MiB/s 00:18:37.714 Latency(us) 00:18:37.714 [2024-11-04T17:19:38.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.714 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:37.714 Verification LBA range: start 0x0 length 0x4000 00:18:37.715 NVMe0n1 : 1.01 7001.18 27.35 0.00 0.00 18210.43 1906.50 18588.39 00:18:37.715 [2024-11-04T17:19:38.519Z] =================================================================================================================== 00:18:37.715 [2024-11-04T17:19:38.519Z] Total : 7001.18 27.35 0.00 0.00 18210.43 1906.50 18588.39 00:18:37.715 17:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:37.715 17:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:37.715 17:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.973 17:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:37.973 17:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:38.232 17:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:38.495 17:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75442 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75442 ']' 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75442 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:18:41.791 17:19:42 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75442 00:18:41.791 killing process with pid 75442 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75442' 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75442 00:18:41.791 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75442 00:18:42.049 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:42.049 17:19:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.308 rmmod nvme_tcp 00:18:42.308 rmmod nvme_fabrics 00:18:42.308 rmmod nvme_keyring 00:18:42.308 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75194 ']' 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75194 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75194 ']' 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75194 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75194 00:18:42.568 killing process with pid 75194 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 75194' 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75194 00:18:42.568 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75194 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:42.827 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:18:43.086 ************************************ 00:18:43.086 END TEST nvmf_failover 00:18:43.086 ************************************ 00:18:43.086 00:18:43.086 real 0m33.247s 00:18:43.086 user 2m7.940s 00:18:43.086 sys 0m6.187s 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.086 ************************************ 00:18:43.086 START TEST nvmf_host_discovery 00:18:43.086 ************************************ 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:43.086 * Looking for test storage... 00:18:43.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:18:43.086 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.348 --rc genhtml_branch_coverage=1 00:18:43.348 --rc genhtml_function_coverage=1 00:18:43.348 --rc genhtml_legend=1 00:18:43.348 --rc geninfo_all_blocks=1 00:18:43.348 --rc geninfo_unexecuted_blocks=1 00:18:43.348 00:18:43.348 ' 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.348 --rc genhtml_branch_coverage=1 00:18:43.348 --rc genhtml_function_coverage=1 00:18:43.348 --rc genhtml_legend=1 00:18:43.348 --rc geninfo_all_blocks=1 00:18:43.348 --rc geninfo_unexecuted_blocks=1 00:18:43.348 00:18:43.348 ' 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.348 --rc genhtml_branch_coverage=1 00:18:43.348 --rc genhtml_function_coverage=1 00:18:43.348 --rc genhtml_legend=1 00:18:43.348 --rc geninfo_all_blocks=1 00:18:43.348 --rc geninfo_unexecuted_blocks=1 00:18:43.348 00:18:43.348 ' 00:18:43.348 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:43.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.349 --rc genhtml_branch_coverage=1 00:18:43.349 --rc genhtml_function_coverage=1 00:18:43.349 --rc genhtml_legend=1 00:18:43.349 --rc geninfo_all_blocks=1 00:18:43.349 --rc geninfo_unexecuted_blocks=1 00:18:43.349 00:18:43.349 ' 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.349 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:43.349 17:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:43.349 Cannot find device "nvmf_init_br" 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:43.349 Cannot find device "nvmf_init_br2" 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:43.349 Cannot find device "nvmf_tgt_br" 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.349 Cannot find device "nvmf_tgt_br2" 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:43.349 Cannot find device "nvmf_init_br" 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:43.349 Cannot find device "nvmf_init_br2" 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:43.349 Cannot find device "nvmf_tgt_br" 00:18:43.349 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:43.350 Cannot find device "nvmf_tgt_br2" 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:43.350 Cannot find device "nvmf_br" 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:43.350 Cannot find device "nvmf_init_if" 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:43.350 Cannot find device "nvmf_init_if2" 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:43.350 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:43.616 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:43.616 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.545 ms 00:18:43.616 00:18:43.616 --- 10.0.0.3 ping statistics --- 00:18:43.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.616 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:18:43.616 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:43.616 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:43.617 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:18:43.617 00:18:43.617 --- 10.0.0.4 ping statistics --- 00:18:43.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.617 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:43.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:43.617 00:18:43.617 --- 10.0.0.1 ping statistics --- 00:18:43.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.617 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:43.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:43.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:43.617 00:18:43.617 --- 10.0.0.2 ping statistics --- 00:18:43.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.617 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75838 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75838 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75838 ']' 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:43.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:43.617 17:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.875 [2024-11-04 17:19:44.480151] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:18:43.875 [2024-11-04 17:19:44.480473] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.875 [2024-11-04 17:19:44.635251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.134 [2024-11-04 17:19:44.718487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.134 [2024-11-04 17:19:44.718574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.134 [2024-11-04 17:19:44.718588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.134 [2024-11-04 17:19:44.718599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.134 [2024-11-04 17:19:44.718610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.134 [2024-11-04 17:19:44.719150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.134 [2024-11-04 17:19:44.797164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.071 [2024-11-04 17:19:45.576139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.071 [2024-11-04 17:19:45.584333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.071 17:19:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.071 null0 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.071 null1 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.071 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75876 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75876 /tmp/host.sock 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75876 ']' 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:45.071 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.071 [2024-11-04 17:19:45.664721] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:18:45.071 [2024-11-04 17:19:45.664820] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75876 ] 00:18:45.071 [2024-11-04 17:19:45.813781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.330 [2024-11-04 17:19:45.872916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.330 [2024-11-04 17:19:45.931990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.330 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:45.330 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:18:45.330 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.330 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:45.330 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.330 17:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:45.330 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.331 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.331 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.331 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:45.331 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:45.331 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.331 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:45.589 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.590 [2024-11-04 17:19:46.372479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:45.590 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:45.849 17:19:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.849 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:18:45.850 17:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:18:46.418 [2024-11-04 17:19:47.019881] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:46.418 [2024-11-04 17:19:47.020147] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:46.418 
[2024-11-04 17:19:47.020249] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:46.418 [2024-11-04 17:19:47.025918] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:46.418 [2024-11-04 17:19:47.080518] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:46.418 [2024-11-04 17:19:47.081561] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1177e50:1 started. 00:18:46.418 [2024-11-04 17:19:47.083696] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:46.418 [2024-11-04 17:19:47.083717] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:46.418 [2024-11-04 17:19:47.088515] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1177e50 was disconnected and freed. delete nvme_qpair. 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:46.986 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.245 [2024-11-04 17:19:47.812701] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1185f80:1 started. 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.245 [2024-11-04 17:19:47.818847] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1185f80 was disconnected and freed. delete nvme_qpair. 
00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.245 [2024-11-04 17:19:47.916607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:47.245 [2024-11-04 17:19:47.917114] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:47.245 [2024-11-04 17:19:47.917142] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:47.245 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:47.246 [2024-11-04 17:19:47.923131] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:18:47.246 [2024-11-04 17:19:47.983709] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:47.246 [2024-11-04 17:19:47.983758] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:47.246 [2024-11-04 17:19:47.983768] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:47.246 [2024-11-04 17:19:47.983773] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.246 17:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths 
nvme0 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:47.246 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.545 [2024-11-04 17:19:48.157444] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:47.545 [2024-11-04 17:19:48.159010] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:47.545 [2024-11-04 17:19:48.161241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.545 [2024-11-04 17:19:48.161306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.545 [2024-11-04 17:19:48.161321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.545 [2024-11-04 17:19:48.161331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.545 [2024-11-04 17:19:48.161340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.545 [2024-11-04 17:19:48.161364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.545 [2024-11-04 17:19:48.161390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.545 [2024-11-04 17:19:48.161403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.545 [2024-11-04 17:19:48.161413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154230 is same with the state(6) to be set 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:47.545 [2024-11-04 17:19:48.164101] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:47.545 [2024-11-04 17:19:48.164130] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:47.545 [2024-11-04 17:19:48.164202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154230 (9): Bad file descriptor 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.545 17:19:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:47.545 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.546 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.546 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:47.546 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:18:47.546 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:47.546 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:47.546 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.546 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.546 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:47.805 17:19:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.805 17:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.181 [2024-11-04 17:19:49.580622] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:49.181 [2024-11-04 17:19:49.580646] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:49.181 [2024-11-04 17:19:49.580664] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:49.181 [2024-11-04 17:19:49.586672] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:49.181 [2024-11-04 17:19:49.645028] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:49.181 [2024-11-04 17:19:49.645899] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x114ceb0:1 started. 00:18:49.181 [2024-11-04 17:19:49.649925] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:49.181 [2024-11-04 17:19:49.649994] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.181 [2024-11-04 17:19:49.651454] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x114ceb0 was disconnected and freed. delete nvme_qpair. 
00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.181 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.181 request: 00:18:49.181 { 00:18:49.181 "name": "nvme", 00:18:49.181 "trtype": "tcp", 00:18:49.181 "traddr": "10.0.0.3", 00:18:49.181 "adrfam": "ipv4", 00:18:49.181 "trsvcid": "8009", 00:18:49.181 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:49.181 "wait_for_attach": true, 00:18:49.181 "method": "bdev_nvme_start_discovery", 00:18:49.181 "req_id": 1 00:18:49.181 } 00:18:49.181 Got JSON-RPC error response 00:18:49.181 response: 00:18:49.181 { 00:18:49.181 "code": -17, 00:18:49.182 "message": "File exists" 00:18:49.182 } 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.182 request: 00:18:49.182 { 00:18:49.182 "name": "nvme_second", 00:18:49.182 "trtype": "tcp", 00:18:49.182 "traddr": "10.0.0.3", 00:18:49.182 "adrfam": "ipv4", 00:18:49.182 "trsvcid": "8009", 00:18:49.182 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:49.182 "wait_for_attach": true, 00:18:49.182 "method": "bdev_nvme_start_discovery", 00:18:49.182 "req_id": 1 00:18:49.182 } 00:18:49.182 Got JSON-RPC error response 00:18:49.182 response: 00:18:49.182 { 00:18:49.182 "code": -17, 00:18:49.182 "message": "File exists" 00:18:49.182 } 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # 
[[ -n '' ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:49.182 17:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.559 [2024-11-04 17:19:50.926313] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:50.559 [2024-11-04 17:19:50.926378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1178e40 with addr=10.0.0.3, port=8010 00:18:50.559 [2024-11-04 17:19:50.926403] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:50.559 [2024-11-04 17:19:50.926414] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:50.559 [2024-11-04 17:19:50.926422] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:51.126 [2024-11-04 17:19:51.926321] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.126 [2024-11-04 17:19:51.926389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1178e40 with addr=10.0.0.3, port=8010 00:18:51.126 [2024-11-04 17:19:51.926413] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:51.126 [2024-11-04 17:19:51.926424] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:51.126 [2024-11-04 17:19:51.926432] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:52.520 [2024-11-04 17:19:52.926205] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:52.520 request: 00:18:52.520 { 00:18:52.520 "name": "nvme_second", 00:18:52.520 "trtype": "tcp", 00:18:52.520 "traddr": "10.0.0.3", 00:18:52.520 "adrfam": "ipv4", 00:18:52.520 "trsvcid": "8010", 00:18:52.520 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:52.520 "wait_for_attach": false, 00:18:52.520 "attach_timeout_ms": 3000, 00:18:52.520 "method": "bdev_nvme_start_discovery", 00:18:52.520 "req_id": 1 00:18:52.520 } 00:18:52.520 Got JSON-RPC error response 00:18:52.520 response: 00:18:52.520 { 00:18:52.520 "code": -110, 00:18:52.520 "message": "Connection timed out" 00:18:52.520 } 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.520 17:19:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:52.520 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:52.521 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75876 00:18:52.521 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:52.521 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.521 17:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:52.521 rmmod nvme_tcp 00:18:52.521 rmmod nvme_fabrics 00:18:52.521 rmmod nvme_keyring 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75838 ']' 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75838 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 75838 ']' 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 75838 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75838 00:18:52.521 killing process with pid 75838 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75838' 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 75838 00:18:52.521 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 75838 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.780 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:53.039 00:18:53.039 real 0m9.876s 00:18:53.039 user 0m18.080s 00:18:53.039 sys 0m2.097s 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.039 ************************************ 00:18:53.039 END TEST nvmf_host_discovery 00:18:53.039 ************************************ 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.039 ************************************ 
00:18:53.039 START TEST nvmf_host_multipath_status 00:18:53.039 ************************************ 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:53.039 * Looking for test storage... 00:18:53.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:18:53.039 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:53.299 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:53.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.300 --rc genhtml_branch_coverage=1 00:18:53.300 --rc genhtml_function_coverage=1 00:18:53.300 --rc genhtml_legend=1 00:18:53.300 --rc geninfo_all_blocks=1 00:18:53.300 --rc geninfo_unexecuted_blocks=1 00:18:53.300 00:18:53.300 ' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:53.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.300 --rc genhtml_branch_coverage=1 00:18:53.300 --rc genhtml_function_coverage=1 00:18:53.300 --rc genhtml_legend=1 00:18:53.300 --rc geninfo_all_blocks=1 00:18:53.300 --rc geninfo_unexecuted_blocks=1 00:18:53.300 00:18:53.300 ' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:53.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.300 --rc genhtml_branch_coverage=1 00:18:53.300 --rc genhtml_function_coverage=1 00:18:53.300 --rc genhtml_legend=1 00:18:53.300 --rc geninfo_all_blocks=1 00:18:53.300 --rc geninfo_unexecuted_blocks=1 00:18:53.300 00:18:53.300 ' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:53.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.300 --rc genhtml_branch_coverage=1 00:18:53.300 --rc genhtml_function_coverage=1 00:18:53.300 --rc genhtml_legend=1 00:18:53.300 --rc geninfo_all_blocks=1 00:18:53.300 --rc geninfo_unexecuted_blocks=1 00:18:53.300 00:18:53.300 ' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:53.300 17:19:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:53.300 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:53.301 Cannot find device "nvmf_init_br" 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:53.301 Cannot find device "nvmf_init_br2" 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:53.301 Cannot find device "nvmf_tgt_br" 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:53.301 Cannot find device "nvmf_tgt_br2" 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:53.301 Cannot find device "nvmf_init_br" 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:53.301 Cannot find device "nvmf_init_br2" 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:53.301 Cannot find device "nvmf_tgt_br" 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:53.301 17:19:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:53.301 Cannot find device "nvmf_tgt_br2" 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:53.301 Cannot find device "nvmf_br" 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:53.301 Cannot find device "nvmf_init_if" 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:53.301 Cannot find device "nvmf_init_if2" 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:53.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:53.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:53.301 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:53.560 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:53.560 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:53.560 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:53.560 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:53.560 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:53.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:53.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:18:53.561 00:18:53.561 --- 10.0.0.3 ping statistics --- 00:18:53.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.561 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:53.561 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:53.561 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:18:53.561 00:18:53.561 --- 10.0.0.4 ping statistics --- 00:18:53.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.561 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:53.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:53.561 00:18:53.561 --- 10.0.0.1 ping statistics --- 00:18:53.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.561 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:53.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:53.561 00:18:53.561 --- 10.0.0.2 ping statistics --- 00:18:53.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.561 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76368 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76368 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76368 ']' 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.561 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:53.820 [2024-11-04 17:19:54.394377] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:18:53.820 [2024-11-04 17:19:54.394479] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.820 [2024-11-04 17:19:54.544163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:53.820 [2024-11-04 17:19:54.597986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.820 [2024-11-04 17:19:54.598065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.820 [2024-11-04 17:19:54.598076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.820 [2024-11-04 17:19:54.598083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.820 [2024-11-04 17:19:54.598090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.821 [2024-11-04 17:19:54.600328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.821 [2024-11-04 17:19:54.600344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.079 [2024-11-04 17:19:54.655359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:54.079 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.079 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:18:54.080 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.080 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:54.080 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:54.080 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.080 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76368 00:18:54.080 17:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:54.338 [2024-11-04 17:19:55.056635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.339 17:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:54.598 Malloc0 00:18:54.857 17:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:55.116 17:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:55.375 17:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:55.633 [2024-11-04 17:19:56.181137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:55.633 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:55.633 [2024-11-04 17:19:56.425290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:55.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76412 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76412 /var/tmp/bdevperf.sock 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76412 ']' 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:55.892 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:56.203 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:56.203 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:18:56.203 17:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:56.462 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:56.722 Nvme0n1 00:18:56.722 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:56.980 Nvme0n1 00:18:57.239 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:57.239 17:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:59.143 17:19:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:59.143 17:19:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:59.401 17:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:59.660 17:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:00.596 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:00.596 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:00.596 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.596 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:01.163 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.163 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:01.163 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.163 17:20:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:01.421 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:01.421 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:01.421 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.421 17:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:01.679 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.679 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:01.679 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.679 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:01.937 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.937 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:01.937 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:01.937 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.194 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.194 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:02.194 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.194 17:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:02.452 17:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.452 17:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:02.452 17:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:02.710 17:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:02.969 17:20:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:04.347 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:04.347 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:04.347 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.347 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:04.347 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.347 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:04.347 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.347 17:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:04.607 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.607 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:04.607 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.607 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:04.866 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.866 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:04.866 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.866 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:05.126 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.126 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:05.126 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.126 17:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:05.386 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.386 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:05.644 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.644 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.644 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.644 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:05.644 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:06.211 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:06.211 17:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:07.589 17:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:07.589 17:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:07.589 17:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.589 17:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:07.589 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.589 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:07.589 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.589 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:07.848 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.848 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.848 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.848 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:08.107 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.107 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:08.107 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.107 17:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:08.675 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.675 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:08.675 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.675 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:08.934 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.934 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:08.934 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.934 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.193 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.193 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:09.193 17:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:09.453 17:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:09.711 17:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:11.089 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:11.089 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:11.089 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.089 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:11.089 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.089 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:11.089 17:20:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.089 17:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:11.657 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:11.657 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:11.657 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.657 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:11.916 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.916 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:11.916 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.916 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:12.174 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.174 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:12.174 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.174 17:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:12.433 17:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.433 17:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:12.433 17:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:12.433 17:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.000 17:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:13.000 17:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:13.000 17:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:13.259 17:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:13.517 17:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:14.487 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:14.487 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:14.487 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.487 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:14.745 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:14.745 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:14.745 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.745 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:15.004 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:15.004 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:15.004 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.004 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:15.263 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.263 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:15.263 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.263 17:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:15.522 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.522 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:15.522 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.522 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:19:15.781 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:15.781 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:15.781 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.781 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:16.040 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:16.040 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:16.040 17:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:16.298 17:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:16.867 17:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:17.804 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:17.804 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:17.804 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.804 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:18.062 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:18.062 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:18.062 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:18.062 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.321 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.321 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:18.321 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.321 17:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
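The assertions traced above all follow one pattern from multipath_status.sh@64: dump the host's I/O paths over the bdevperf RPC socket, pick the path for a given listener port with jq, and compare one field against the expected value. Below is a minimal sketch of that helper, using only the script path, socket, and jq filter visible in the log; the jq --arg variables are added here for readability and are an assumption, not part of the original script.

port_status() {
    local port=$1 field=$2 expected=$3 actual
    # Query the host-side multipath view through the bdevperf RPC socket
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                 bdev_nvme_get_io_paths |
             jq -r --arg port "$port" --arg field "$field" \
                 '.poll_groups[].io_paths[] | select(.transport.trsvcid == $port) | .[$field]')
    # check_status calls this six times per round: current/connected/accessible
    # for listener ports 4420 and 4421, as in the trace above
    [[ "$actual" == "$expected" ]]
}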
00:19:18.580 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.580 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:18.580 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.580 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:18.839 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.839 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:18.839 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.839 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:19.098 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:19.098 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:19.098 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.098 17:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:19.357 17:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:19.357 17:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:19.615 17:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:19.616 17:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:20.183 17:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:20.442 17:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:21.378 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:21.378 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:21.379 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
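Between rounds of checks the trace flips the target's ANA state per listener and sleeps one second before re-checking; multipath_status.sh@116-120 above also switches the multipath policy to active_active before setting both listeners to optimized. A sketch of that step, assembled from the exact RPC calls shown in the log (the helper name matches the script's own set_ANA_state; the trailing sleep mirrors the sleep 1 the test issues before check_status):

set_ANA_state() {
    local state_4420=$1 state_4421=$2
    # Target-side RPCs (default rpc socket), one per TCP listener of cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
}

# As in the trace: set both listeners optimized, give the host a second to
# refresh its ANA state, then re-run the per-port assertions.
set_ANA_state optimized optimized
sleep 1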
00:19:21.379 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:21.638 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.638 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:21.638 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.638 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:21.897 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:21.897 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:21.897 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:21.897 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.156 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.156 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:22.156 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:22.156 17:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.414 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.414 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:22.414 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.414 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:22.980 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.980 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:22.980 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.980 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:22.980 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.980 
17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:22.980 17:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:23.546 17:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:23.805 17:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:24.741 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:24.741 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:24.741 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.741 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:25.000 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:25.000 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:25.000 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:25.000 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.259 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.260 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:25.260 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.260 17:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:25.518 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.518 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:25.518 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.518 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:25.778 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.778 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:25.778 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.778 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:26.037 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.037 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:26.037 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.037 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:26.295 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.295 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:26.295 17:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:26.554 17:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:26.812 17:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:28.191 17:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:28.191 17:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:28.191 17:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.191 17:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:28.191 17:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.191 17:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:28.191 17:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.191 17:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:28.450 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.450 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:28.450 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.450 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:28.730 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.730 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:28.730 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:28.730 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.988 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.988 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:28.988 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:28.988 17:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.556 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.556 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:29.556 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.556 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:29.556 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.556 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:29.556 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:29.815 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:30.073 17:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:31.452 17:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:31.452 17:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:31.452 17:20:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.452 17:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:31.452 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.452 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:31.452 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.452 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:31.711 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:31.711 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:31.711 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.711 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:31.971 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.971 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:31.971 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.971 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:32.230 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.230 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:32.230 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.230 17:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:32.489 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.489 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:32.489 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:32.489 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76412 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76412 ']' 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76412 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76412 00:19:32.749 killing process with pid 76412 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76412' 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76412 00:19:32.749 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76412 00:19:32.749 { 00:19:32.749 "results": [ 00:19:32.749 { 00:19:32.749 "job": "Nvme0n1", 00:19:32.749 "core_mask": "0x4", 00:19:32.749 "workload": "verify", 00:19:32.749 "status": "terminated", 00:19:32.749 "verify_range": { 00:19:32.749 "start": 0, 00:19:32.749 "length": 16384 00:19:32.749 }, 00:19:32.749 "queue_depth": 128, 00:19:32.749 "io_size": 4096, 00:19:32.749 "runtime": 35.576064, 00:19:32.749 "iops": 7481.603361181271, 00:19:32.749 "mibps": 29.22501312961434, 00:19:32.749 "io_failed": 0, 00:19:32.749 "io_timeout": 0, 00:19:32.749 "avg_latency_us": 17076.37567992087, 00:19:32.749 "min_latency_us": 796.8581818181818, 00:19:32.749 "max_latency_us": 4026531.84 00:19:32.749 } 00:19:32.749 ], 00:19:32.749 "core_count": 1 00:19:32.749 } 00:19:33.013 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76412 00:19:33.013 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:33.013 [2024-11-04 17:19:56.494998] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:19:33.013 [2024-11-04 17:19:56.495097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76412 ] 00:19:33.013 [2024-11-04 17:19:56.638871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.013 [2024-11-04 17:19:56.694713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.013 [2024-11-04 17:19:56.750521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.013 Running I/O for 90 seconds... 
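When bdevperf is killed at the end of the run (killprocess 76412 above) it prints the JSON summary shown in the log: roughly 7481.6 IOPS, 29.2 MiB/s, and 17.1 ms average latency over a 35.6 s verify workload on core mask 0x4. Purely as an illustration of reading that summary, and assuming it has been saved to a hypothetical result.json (the test itself just waits on the process and cats try.txt), the headline numbers could be pulled out with jq; the field names are exactly those in the log.

jq -r '.results[] |
       "\(.job): \(.iops | round) IOPS, avg latency \(.avg_latency_us | round) us, runtime \(.runtime) s"' \
   result.json
# expected output for the run above:
# Nvme0n1: 7482 IOPS, avg latency 17076 us, runtime 35.576064 s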
00:19:33.013 7189.00 IOPS, 28.08 MiB/s [2024-11-04T17:20:33.817Z] 7314.00 IOPS, 28.57 MiB/s [2024-11-04T17:20:33.817Z] 7260.00 IOPS, 28.36 MiB/s [2024-11-04T17:20:33.817Z] 7209.00 IOPS, 28.16 MiB/s [2024-11-04T17:20:33.817Z] 7197.80 IOPS, 28.12 MiB/s [2024-11-04T17:20:33.817Z] 7377.00 IOPS, 28.82 MiB/s [2024-11-04T17:20:33.817Z] 7660.29 IOPS, 29.92 MiB/s [2024-11-04T17:20:33.817Z] 7727.62 IOPS, 30.19 MiB/s [2024-11-04T17:20:33.817Z] 7753.44 IOPS, 30.29 MiB/s [2024-11-04T17:20:33.817Z] 7869.90 IOPS, 30.74 MiB/s [2024-11-04T17:20:33.817Z] 7911.45 IOPS, 30.90 MiB/s [2024-11-04T17:20:33.817Z] 7936.83 IOPS, 31.00 MiB/s [2024-11-04T17:20:33.817Z] 7962.00 IOPS, 31.10 MiB/s [2024-11-04T17:20:33.817Z] 7977.86 IOPS, 31.16 MiB/s [2024-11-04T17:20:33.817Z] 7991.53 IOPS, 31.22 MiB/s [2024-11-04T17:20:33.817Z] [2024-11-04 17:20:13.804166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.013 [2024-11-04 17:20:13.804253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:33.013 [2024-11-04 17:20:13.804330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.013 [2024-11-04 17:20:13.804351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:33.013 [2024-11-04 17:20:13.804373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.013 [2024-11-04 17:20:13.804388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:33.013 [2024-11-04 17:20:13.804410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.013 [2024-11-04 17:20:13.804425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:33.013 [2024-11-04 17:20:13.804445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.804930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.804957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.805003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100176 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.805039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.805074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.805109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.805144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.805207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.805244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.014 [2024-11-04 17:20:13.805280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:76 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 
17:20:13.805842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.805969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.805991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.806006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:33.014 [2024-11-04 17:20:13.806043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.014 [2024-11-04 17:20:13.806058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.806340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.806380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.806418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.806455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.806492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.806530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.806581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.806634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806685] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.015 [2024-11-04 17:20:13.806949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.806970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 
17:20:13.807084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100392 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:33.015 [2024-11-04 17:20:13.807723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.015 [2024-11-04 17:20:13.807737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.807765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.807780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.807800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.807814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.807834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.807848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.807884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.807899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.807919] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.807934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.807954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.807968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.807989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 
17:20:13.808321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 
cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.808950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.808970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.808984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.809005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.809020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.809040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.809055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.809075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.809089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.809110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.809124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.809144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.809159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.809179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.809200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.810081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.016 [2024-11-04 17:20:13.810109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.810142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.016 [2024-11-04 17:20:13.810160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:33.016 [2024-11-04 17:20:13.810202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:13.810217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:13.810256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:13.810290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:13.810322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:13.810338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:13.810368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:13.810383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:13.810411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 
17:20:13.810427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:13.810455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:13.810470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:13.810514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:13.810533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:33.017 7946.56 IOPS, 31.04 MiB/s [2024-11-04T17:20:33.821Z] 7479.12 IOPS, 29.22 MiB/s [2024-11-04T17:20:33.821Z] 7063.61 IOPS, 27.59 MiB/s [2024-11-04T17:20:33.821Z] 6691.84 IOPS, 26.14 MiB/s [2024-11-04T17:20:33.821Z] 6406.00 IOPS, 25.02 MiB/s [2024-11-04T17:20:33.821Z] 6499.81 IOPS, 25.39 MiB/s [2024-11-04T17:20:33.821Z] 6580.68 IOPS, 25.71 MiB/s [2024-11-04T17:20:33.821Z] 6664.00 IOPS, 26.03 MiB/s [2024-11-04T17:20:33.821Z] 6772.83 IOPS, 26.46 MiB/s [2024-11-04T17:20:33.821Z] 6876.76 IOPS, 26.86 MiB/s [2024-11-04T17:20:33.821Z] 7035.73 IOPS, 27.48 MiB/s [2024-11-04T17:20:33.821Z] 7112.56 IOPS, 27.78 MiB/s [2024-11-04T17:20:33.821Z] 7149.68 IOPS, 27.93 MiB/s [2024-11-04T17:20:33.821Z] 7175.97 IOPS, 28.03 MiB/s [2024-11-04T17:20:33.821Z] 7224.03 IOPS, 28.22 MiB/s [2024-11-04T17:20:33.821Z] 7282.61 IOPS, 28.45 MiB/s [2024-11-04T17:20:33.821Z] 7350.78 IOPS, 28.71 MiB/s [2024-11-04T17:20:33.821Z] [2024-11-04 17:20:30.856155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.856242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.856385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.856418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.856450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.856482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.856851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.856865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.857245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.857270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.857292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.857307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.857326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.857340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.857360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.857390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.857410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.857424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.857444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.857459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.857482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.857497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.857517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:33.017 [2024-11-04 17:20:30.857532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.859371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.859410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.859455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.859472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.859492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.859517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:33.017 [2024-11-04 17:20:30.859537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.017 [2024-11-04 17:20:30.859552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:33.018 [2024-11-04 17:20:30.859572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.018 [2024-11-04 17:20:30.859587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:33.018 [2024-11-04 17:20:30.859624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.018 [2024-11-04 17:20:30.859639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:33.018 [2024-11-04 17:20:30.859660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.018 [2024-11-04 17:20:30.859674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:33.018 [2024-11-04 17:20:30.859695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.018 [2024-11-04 17:20:30.859710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:33.018 [2024-11-04 17:20:30.859731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.018 [2024-11-04 17:20:30.859746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:33.018 [2024-11-04 17:20:30.859781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.018 [2024-11-04 17:20:30.859796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:33.018 7409.30 IOPS, 28.94 MiB/s [2024-11-04T17:20:33.822Z] 7436.44 IOPS, 29.05 MiB/s [2024-11-04T17:20:33.822Z] 7463.03 IOPS, 29.15 MiB/s [2024-11-04T17:20:33.822Z] Received shutdown signal, test time was about 35.576810 seconds 00:19:33.018 00:19:33.018 Latency(us) 00:19:33.018 [2024-11-04T17:20:33.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.018 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:33.018 Verification LBA range: start 0x0 length 0x4000 00:19:33.018 Nvme0n1 : 35.58 7481.60 29.23 0.00 0.00 17076.38 796.86 4026531.84 00:19:33.018 [2024-11-04T17:20:33.822Z] =================================================================================================================== 00:19:33.018 [2024-11-04T17:20:33.822Z] Total : 7481.60 29.23 0.00 0.00 17076.38 796.86 4026531.84 00:19:33.018 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.277 17:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:33.277 rmmod nvme_tcp 00:19:33.277 rmmod nvme_fabrics 00:19:33.277 rmmod nvme_keyring 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76368 ']' 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76368 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76368 ']' 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76368 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:19:33.277 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:33.277 17:20:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76368 00:19:33.547 killing process with pid 76368 00:19:33.547 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76368' 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76368 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76368 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:19:33.548 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.817 17:20:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:19:33.817 00:19:33.817 real 0m40.903s 00:19:33.817 user 2m12.364s 00:19:33.817 sys 0m12.600s 00:19:33.817 ************************************ 00:19:33.817 END TEST nvmf_host_multipath_status 00:19:33.817 ************************************ 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:33.817 17:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.078 ************************************ 00:19:34.078 START TEST nvmf_discovery_remove_ifc 00:19:34.078 ************************************ 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:34.078 * Looking for test storage... 
00:19:34.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:34.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.078 --rc genhtml_branch_coverage=1 00:19:34.078 --rc genhtml_function_coverage=1 00:19:34.078 --rc genhtml_legend=1 00:19:34.078 --rc geninfo_all_blocks=1 00:19:34.078 --rc geninfo_unexecuted_blocks=1 00:19:34.078 00:19:34.078 ' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:34.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.078 --rc genhtml_branch_coverage=1 00:19:34.078 --rc genhtml_function_coverage=1 00:19:34.078 --rc genhtml_legend=1 00:19:34.078 --rc geninfo_all_blocks=1 00:19:34.078 --rc geninfo_unexecuted_blocks=1 00:19:34.078 00:19:34.078 ' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:34.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.078 --rc genhtml_branch_coverage=1 00:19:34.078 --rc genhtml_function_coverage=1 00:19:34.078 --rc genhtml_legend=1 00:19:34.078 --rc geninfo_all_blocks=1 00:19:34.078 --rc geninfo_unexecuted_blocks=1 00:19:34.078 00:19:34.078 ' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:34.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.078 --rc genhtml_branch_coverage=1 00:19:34.078 --rc genhtml_function_coverage=1 00:19:34.078 --rc genhtml_legend=1 00:19:34.078 --rc geninfo_all_blocks=1 00:19:34.078 --rc geninfo_unexecuted_blocks=1 00:19:34.078 00:19:34.078 ' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:34.078 17:20:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.078 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.079 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.079 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:34.343 17:20:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:34.343 Cannot find device "nvmf_init_br" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:34.343 Cannot find device "nvmf_init_br2" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:34.343 Cannot find device "nvmf_tgt_br" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:34.343 Cannot find device "nvmf_tgt_br2" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:34.343 Cannot find device "nvmf_init_br" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:34.343 Cannot find device "nvmf_init_br2" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:34.343 Cannot find device "nvmf_tgt_br" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:34.343 Cannot find device "nvmf_tgt_br2" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:34.343 Cannot find device "nvmf_br" 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:34.343 17:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:34.343 Cannot find device "nvmf_init_if" 00:19:34.343 17:20:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:34.343 Cannot find device "nvmf_init_if2" 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:34.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:34.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:34.343 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:34.344 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:34.344 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:34.344 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:34.344 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:34.604 17:20:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:34.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:34.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:19:34.604 00:19:34.604 --- 10.0.0.3 ping statistics --- 00:19:34.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.604 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:34.604 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:34.604 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:19:34.604 00:19:34.604 --- 10.0.0.4 ping statistics --- 00:19:34.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.604 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:34.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:34.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:34.604 00:19:34.604 --- 10.0.0.1 ping statistics --- 00:19:34.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.604 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:34.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:34.604 00:19:34.604 --- 10.0.0.2 ping statistics --- 00:19:34.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.604 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77271 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77271 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77271 ']' 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:34.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
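
For reference, the nvmf_veth_init trace above builds the test topology out of veth pairs: the initiator-side interfaces (nvmf_init_if, 10.0.0.1) stay in the default namespace, the target-side interfaces (nvmf_tgt_if, 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and both halves are joined through the nvmf_br bridge, with iptables opened for NVMe/TCP port 4420. A minimal standalone sketch of the same setup, covering only the first of the two veth pairs and reusing the names and addresses from the trace (run as root; error handling and the second pair are omitted):

    ip netns add nvmf_tgt_ns_spdk
    # initiator side stays in the default namespace, target side goes into the netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the two sides together and allow NVMe/TCP traffic in
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target reachability check, as in the trace
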
00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:34.604 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.604 [2024-11-04 17:20:35.387745] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:19:34.604 [2024-11-04 17:20:35.388066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.863 [2024-11-04 17:20:35.547909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.863 [2024-11-04 17:20:35.606946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.863 [2024-11-04 17:20:35.607196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.863 [2024-11-04 17:20:35.607243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.863 [2024-11-04 17:20:35.607254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.863 [2024-11-04 17:20:35.607265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.863 [2024-11-04 17:20:35.607739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.124 [2024-11-04 17:20:35.669267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.124 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.124 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:19:35.124 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.124 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.124 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.124 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.124 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:35.124 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.125 [2024-11-04 17:20:35.786465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.125 [2024-11-04 17:20:35.794671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:35.125 null0 00:19:35.125 [2024-11-04 17:20:35.826470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77301 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77301 /tmp/host.sock 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77301 ']' 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.125 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.125 17:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.125 [2024-11-04 17:20:35.897622] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:19:35.125 [2024-11-04 17:20:35.897713] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77301 ] 00:19:35.384 [2024-11-04 17:20:36.046659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.384 [2024-11-04 17:20:36.115456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.384 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.643 [2024-11-04 17:20:36.232790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.643 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.643 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:35.643 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.643 17:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.581 [2024-11-04 17:20:37.289536] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:36.581 [2024-11-04 17:20:37.289592] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:36.581 [2024-11-04 17:20:37.289630] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:36.581 [2024-11-04 17:20:37.295579] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:36.581 [2024-11-04 17:20:37.350057] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:36.581 [2024-11-04 17:20:37.351145] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d02fb0:1 started. 00:19:36.581 [2024-11-04 17:20:37.352944] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:36.581 [2024-11-04 17:20:37.353015] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:36.581 [2024-11-04 17:20:37.353039] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:36.581 [2024-11-04 17:20:37.353054] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:36.581 [2024-11-04 17:20:37.353077] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.581 [2024-11-04 17:20:37.358373] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d02fb0 was disconnected and freed. delete nvme_qpair. 
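
The host side of the test mirrors what the trace above shows: the nvmf target runs inside the namespace, a second nvmf_tgt instance acts as the initiator with its RPC socket on /tmp/host.sock, and discovery is started against the target's discovery service on 10.0.0.3:8009 with an aggressive 2-second ctrlr-loss timeout. Below is a condensed sketch using scripts/rpc.py directly (the rpc_cmd helper in the trace wraps it); flag values are copied from the trace, while backgrounding the apps with &, the simplified polling loop, and the omission of the target-side subsystem/listener configuration (the rpc_cmd batch at discovery_remove_ifc.sh line 43) are simplifications of this sketch:

    # target (namespace side) and host (initiator side) apps, as launched above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    # once the host app is listening on /tmp/host.sock
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # wait_for_bdev nvme0n1: poll until the discovered namespace shows up as a host bdev
    until [ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" = nvme0n1 ]; do
        sleep 1
    done
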
00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.581 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:36.840 17:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:37.777 17:20:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:37.777 17:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:39.154 17:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:40.090 17:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:41.026 17:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:41.963 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:41.963 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.963 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:41.963 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.963 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:41.963 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.963 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:41.963 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.223 [2024-11-04 17:20:42.780716] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:42.223 [2024-11-04 17:20:42.780787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.223 [2024-11-04 17:20:42.780801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.223 [2024-11-04 17:20:42.780813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.223 [2024-11-04 17:20:42.780822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.223 [2024-11-04 17:20:42.780831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.223 [2024-11-04 17:20:42.780840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.223 [2024-11-04 17:20:42.780849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.223 [2024-11-04 17:20:42.780858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.223 [2024-11-04 17:20:42.780867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.223 [2024-11-04 17:20:42.780876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.223 [2024-11-04 17:20:42.780899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf240 is same with the state(6) to be set 00:19:42.223 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:42.223 17:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:42.223 [2024-11-04 17:20:42.790711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf240 (9): Bad file descriptor 00:19:42.223 [2024-11-04 17:20:42.800745] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:42.223 [2024-11-04 17:20:42.800769] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:42.223 [2024-11-04 17:20:42.800779] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:42.223 [2024-11-04 17:20:42.800784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:42.223 [2024-11-04 17:20:42.800833] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:43.167 [2024-11-04 17:20:43.814367] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:43.167 [2024-11-04 17:20:43.814483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cdf240 with addr=10.0.0.3, port=4420 00:19:43.167 [2024-11-04 17:20:43.814531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf240 is same with the state(6) to be set 00:19:43.167 [2024-11-04 17:20:43.814604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdf240 (9): Bad file descriptor 00:19:43.167 [2024-11-04 17:20:43.815525] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:43.167 [2024-11-04 17:20:43.815634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:43.167 [2024-11-04 17:20:43.815669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:43.167 [2024-11-04 17:20:43.815691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:43.167 [2024-11-04 17:20:43.815711] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:43.167 [2024-11-04 17:20:43.815724] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:19:43.167 [2024-11-04 17:20:43.815736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:43.167 [2024-11-04 17:20:43.815757] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:43.167 [2024-11-04 17:20:43.815769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:43.167 17:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:44.105 [2024-11-04 17:20:44.815850] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:44.105 [2024-11-04 17:20:44.815917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:44.105 [2024-11-04 17:20:44.815958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:44.105 [2024-11-04 17:20:44.815984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:44.105 [2024-11-04 17:20:44.815993] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:44.105 [2024-11-04 17:20:44.816002] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:44.105 [2024-11-04 17:20:44.816010] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:44.105 [2024-11-04 17:20:44.816015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:19:44.105 [2024-11-04 17:20:44.816046] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:44.105 [2024-11-04 17:20:44.816088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.105 [2024-11-04 17:20:44.816102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.105 [2024-11-04 17:20:44.816115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.105 [2024-11-04 17:20:44.816123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.105 [2024-11-04 17:20:44.816132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.105 [2024-11-04 17:20:44.816141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.105 [2024-11-04 17:20:44.816149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.105 [2024-11-04 17:20:44.816157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.105 [2024-11-04 17:20:44.816182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.105 [2024-11-04 17:20:44.816207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.105 [2024-11-04 17:20:44.816231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
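
The errno 110 timeouts, reconnect failures, and discovery-entry removal above are the intended effect of the step at host/discovery_remove_ifc.sh lines 75-76: the target-side address and link are pulled out from under the established connection, and with --ctrlr-loss-timeout-sec 2 the host soon gives up, fails the controller, and drops the nvme0n1 bdev. A sketch of that removal plus the drain check the test performs (names as in the trace; the polling loop is a simplification of wait_for_bdev ''):

    # yank the target-side interface the host is connected through
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # wait until no bdevs are left on the host side
    until [ -z "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]; do
        sleep 1
    done
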
00:19:44.105 [2024-11-04 17:20:44.816780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6aa20 (9): Bad file descriptor 00:19:44.105 [2024-11-04 17:20:44.817817] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:44.105 [2024-11-04 17:20:44.817847] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:44.105 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:44.105 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.105 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:44.105 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:44.105 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.105 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.105 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:44.105 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:44.364 17:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:45.301 17:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:45.301 17:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.301 17:20:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:45.301 17:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:45.301 17:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.301 17:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:45.301 17:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:45.301 17:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.301 17:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:45.301 17:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:46.238 [2024-11-04 17:20:46.824298] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:46.238 [2024-11-04 17:20:46.824368] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:46.238 [2024-11-04 17:20:46.824402] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:46.238 [2024-11-04 17:20:46.830344] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:46.238 [2024-11-04 17:20:46.884725] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:46.238 [2024-11-04 17:20:46.885719] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1d0b290:1 started. 00:19:46.238 [2024-11-04 17:20:46.887162] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:46.238 [2024-11-04 17:20:46.887234] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:46.238 [2024-11-04 17:20:46.887258] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:46.238 [2024-11-04 17:20:46.887274] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:46.238 [2024-11-04 17:20:46.887283] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:46.238 [2024-11-04 17:20:46.892906] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1d0b290 was disconnected and freed. delete nvme_qpair. 
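
Re-adding the same address and bringing the interface back up (discovery_remove_ifc.sh lines 82-83, traced above) lets the still-running discovery service reconnect: a new controller is attached as nvme1 and its namespace reappears as nvme1n1, which is what the final wait_for_bdev check below confirms. As a sketch, using the same simplified polling loop as before:

    # restore the target-side interface
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # wait_for_bdev nvme1n1: the re-attached namespace shows up under a new name
    until [ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" = nvme1n1 ]; do
        sleep 1
    done
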
00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77301 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77301 ']' 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77301 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77301 00:19:46.498 killing process with pid 77301 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77301' 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77301 00:19:46.498 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77301 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.757 rmmod nvme_tcp 00:19:46.757 rmmod nvme_fabrics 00:19:46.757 rmmod nvme_keyring 00:19:46.757 17:20:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77271 ']' 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77271 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77271 ']' 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77271 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77271 00:19:46.757 killing process with pid 77271 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77271' 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77271 00:19:46.757 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77271 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:47.016 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:47.275 00:19:47.275 real 0m13.318s 00:19:47.275 user 0m22.464s 00:19:47.275 sys 0m2.540s 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:47.275 17:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:47.275 ************************************ 00:19:47.275 END TEST nvmf_discovery_remove_ifc 00:19:47.275 ************************************ 00:19:47.275 17:20:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:47.275 17:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:47.275 17:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:47.275 17:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.275 ************************************ 00:19:47.275 START TEST nvmf_identify_kernel_target 00:19:47.275 ************************************ 00:19:47.275 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:47.535 * Looking for test storage... 
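The teardown traced above (pids 77301 and 77271) goes through a killprocess helper: confirm a PID was passed, check the process is still alive with kill -0, look up its command name, refuse to act on a sudo wrapper, then kill and wait. A minimal sketch of that pattern, reconstructed from the trace; the sudo branch is simplified relative to SPDK's real helper, and the PIDs in the usage comment are simply the ones visible in the log.

#!/usr/bin/env bash
# Sketch of the killprocess pattern from the teardown trace above.
# The sudo branch is simplified; SPDK's actual helper handles it differently.
killprocess() {
	local pid=$1
	[[ -n $pid ]] || return 1                    # a PID argument is required
	kill -0 "$pid" 2>/dev/null || return 0       # process already gone, nothing to do
	local name=
	if [[ $(uname) == Linux ]]; then
		name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 / reactor_1 for SPDK apps
	fi
	[[ $name == sudo ]] && return 1              # do not kill a sudo wrapper directly
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" 2>/dev/null || true              # works because the app is a child of this shell
}

# usage with the two PIDs captured in the log above:
# killprocess 77301
# killprocess 77271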
00:19:47.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:47.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.535 --rc genhtml_branch_coverage=1 00:19:47.535 --rc genhtml_function_coverage=1 00:19:47.535 --rc genhtml_legend=1 00:19:47.535 --rc geninfo_all_blocks=1 00:19:47.535 --rc geninfo_unexecuted_blocks=1 00:19:47.535 00:19:47.535 ' 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:47.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.535 --rc genhtml_branch_coverage=1 00:19:47.535 --rc genhtml_function_coverage=1 00:19:47.535 --rc genhtml_legend=1 00:19:47.535 --rc geninfo_all_blocks=1 00:19:47.535 --rc geninfo_unexecuted_blocks=1 00:19:47.535 00:19:47.535 ' 00:19:47.535 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:47.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.535 --rc genhtml_branch_coverage=1 00:19:47.535 --rc genhtml_function_coverage=1 00:19:47.535 --rc genhtml_legend=1 00:19:47.535 --rc geninfo_all_blocks=1 00:19:47.535 --rc geninfo_unexecuted_blocks=1 00:19:47.535 00:19:47.535 ' 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:47.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.536 --rc genhtml_branch_coverage=1 00:19:47.536 --rc genhtml_function_coverage=1 00:19:47.536 --rc genhtml_legend=1 00:19:47.536 --rc geninfo_all_blocks=1 00:19:47.536 --rc geninfo_unexecuted_blocks=1 00:19:47.536 00:19:47.536 ' 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
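The lcov check traced above runs the reported version through an lt/cmp_versions helper that splits the dotted versions on '.', '-' and ':' and compares them component by component; here 1.15 < 2 holds, so the older --rc lcov_* flags are selected. A minimal sketch of that comparison, assuming numeric components only; the helper below is named vlt for illustration and is not SPDK's cmp_versions.

#!/usr/bin/env bash
# Sketch of a dotted-version "less than" check, modeled on the lt/cmp_versions trace above.
vlt() {  # usage: vlt 1.15 2  -> returns 0 (true) when $1 < $2
	local IFS='.-:'
	local -a ver1 ver2
	read -ra ver1 <<< "$1"
	read -ra ver2 <<< "$2"
	local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
	local v a b
	for (( v = 0; v < len; v++ )); do
		a=${ver1[v]:-0} b=${ver2[v]:-0}          # missing components count as 0
		(( 10#$a < 10#$b )) && return 0           # first smaller component decides
		(( 10#$a > 10#$b )) && return 1           # first larger component decides
	done
	return 1                                          # equal versions are not "less than"
}

# example mirroring the trace: pick the rc-style flags for an lcov older than 2.x
if vlt 1.15 2; then
	lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi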
00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.536 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:47.536 17:20:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.536 17:20:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:47.536 Cannot find device "nvmf_init_br" 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:47.536 Cannot find device "nvmf_init_br2" 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:47.536 Cannot find device "nvmf_tgt_br" 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.536 Cannot find device "nvmf_tgt_br2" 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:47.536 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:47.796 Cannot find device "nvmf_init_br" 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:47.796 Cannot find device "nvmf_init_br2" 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:47.796 Cannot find device "nvmf_tgt_br" 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:47.796 Cannot find device "nvmf_tgt_br2" 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:47.796 Cannot find device "nvmf_br" 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:47.796 Cannot find device "nvmf_init_if" 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:47.796 Cannot find device "nvmf_init_if2" 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.796 17:20:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.796 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:48.055 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:48.055 17:20:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:48.055 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:48.055 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:48.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:48.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:48.056 00:19:48.056 --- 10.0.0.3 ping statistics --- 00:19:48.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.056 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:48.056 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:48.056 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:19:48.056 00:19:48.056 --- 10.0.0.4 ping statistics --- 00:19:48.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.056 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:19:48.056 00:19:48.056 --- 10.0.0.1 ping statistics --- 00:19:48.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.056 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:48.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:48.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:19:48.056 00:19:48.056 --- 10.0.0.2 ping statistics --- 00:19:48.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.056 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:48.056 17:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:48.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.623 Waiting for block devices as requested 00:19:48.623 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:48.623 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:48.623 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:48.883 No valid GPT data, bailing 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:19:48.883 17:20:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:48.883 No valid GPT data, bailing 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:48.883 No valid GPT data, bailing 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:48.883 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:49.142 No valid GPT data, bailing 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 -a 10.0.0.1 -t tcp -s 4420 00:19:49.142 00:19:49.142 Discovery Log Number of Records 2, Generation counter 2 00:19:49.142 =====Discovery Log Entry 0====== 00:19:49.142 trtype: tcp 00:19:49.142 adrfam: ipv4 00:19:49.142 subtype: current discovery subsystem 00:19:49.142 treq: not specified, sq flow control disable supported 00:19:49.142 portid: 1 00:19:49.142 trsvcid: 4420 00:19:49.142 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:49.142 traddr: 10.0.0.1 00:19:49.142 eflags: none 00:19:49.142 sectype: none 00:19:49.142 =====Discovery Log Entry 1====== 00:19:49.142 trtype: tcp 00:19:49.142 adrfam: ipv4 00:19:49.142 subtype: nvme subsystem 00:19:49.142 treq: not 
specified, sq flow control disable supported 00:19:49.142 portid: 1 00:19:49.142 trsvcid: 4420 00:19:49.142 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:49.142 traddr: 10.0.0.1 00:19:49.142 eflags: none 00:19:49.142 sectype: none 00:19:49.142 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:49.142 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:49.401 ===================================================== 00:19:49.401 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:49.401 ===================================================== 00:19:49.401 Controller Capabilities/Features 00:19:49.401 ================================ 00:19:49.401 Vendor ID: 0000 00:19:49.401 Subsystem Vendor ID: 0000 00:19:49.401 Serial Number: 5c24f187d27978b5f7f3 00:19:49.401 Model Number: Linux 00:19:49.401 Firmware Version: 6.8.9-20 00:19:49.401 Recommended Arb Burst: 0 00:19:49.401 IEEE OUI Identifier: 00 00 00 00:19:49.401 Multi-path I/O 00:19:49.401 May have multiple subsystem ports: No 00:19:49.401 May have multiple controllers: No 00:19:49.401 Associated with SR-IOV VF: No 00:19:49.401 Max Data Transfer Size: Unlimited 00:19:49.401 Max Number of Namespaces: 0 00:19:49.401 Max Number of I/O Queues: 1024 00:19:49.401 NVMe Specification Version (VS): 1.3 00:19:49.401 NVMe Specification Version (Identify): 1.3 00:19:49.401 Maximum Queue Entries: 1024 00:19:49.401 Contiguous Queues Required: No 00:19:49.401 Arbitration Mechanisms Supported 00:19:49.401 Weighted Round Robin: Not Supported 00:19:49.401 Vendor Specific: Not Supported 00:19:49.401 Reset Timeout: 7500 ms 00:19:49.401 Doorbell Stride: 4 bytes 00:19:49.401 NVM Subsystem Reset: Not Supported 00:19:49.401 Command Sets Supported 00:19:49.401 NVM Command Set: Supported 00:19:49.401 Boot Partition: Not Supported 00:19:49.401 Memory Page Size Minimum: 4096 bytes 00:19:49.401 Memory Page Size Maximum: 4096 bytes 00:19:49.401 Persistent Memory Region: Not Supported 00:19:49.401 Optional Asynchronous Events Supported 00:19:49.401 Namespace Attribute Notices: Not Supported 00:19:49.401 Firmware Activation Notices: Not Supported 00:19:49.401 ANA Change Notices: Not Supported 00:19:49.401 PLE Aggregate Log Change Notices: Not Supported 00:19:49.401 LBA Status Info Alert Notices: Not Supported 00:19:49.401 EGE Aggregate Log Change Notices: Not Supported 00:19:49.401 Normal NVM Subsystem Shutdown event: Not Supported 00:19:49.401 Zone Descriptor Change Notices: Not Supported 00:19:49.401 Discovery Log Change Notices: Supported 00:19:49.401 Controller Attributes 00:19:49.401 128-bit Host Identifier: Not Supported 00:19:49.401 Non-Operational Permissive Mode: Not Supported 00:19:49.401 NVM Sets: Not Supported 00:19:49.401 Read Recovery Levels: Not Supported 00:19:49.401 Endurance Groups: Not Supported 00:19:49.401 Predictable Latency Mode: Not Supported 00:19:49.401 Traffic Based Keep ALive: Not Supported 00:19:49.401 Namespace Granularity: Not Supported 00:19:49.401 SQ Associations: Not Supported 00:19:49.401 UUID List: Not Supported 00:19:49.401 Multi-Domain Subsystem: Not Supported 00:19:49.401 Fixed Capacity Management: Not Supported 00:19:49.401 Variable Capacity Management: Not Supported 00:19:49.401 Delete Endurance Group: Not Supported 00:19:49.401 Delete NVM Set: Not Supported 00:19:49.401 Extended LBA Formats Supported: Not Supported 00:19:49.401 Flexible Data 
Placement Supported: Not Supported 00:19:49.401 00:19:49.401 Controller Memory Buffer Support 00:19:49.401 ================================ 00:19:49.401 Supported: No 00:19:49.401 00:19:49.401 Persistent Memory Region Support 00:19:49.401 ================================ 00:19:49.401 Supported: No 00:19:49.401 00:19:49.401 Admin Command Set Attributes 00:19:49.401 ============================ 00:19:49.401 Security Send/Receive: Not Supported 00:19:49.401 Format NVM: Not Supported 00:19:49.401 Firmware Activate/Download: Not Supported 00:19:49.401 Namespace Management: Not Supported 00:19:49.401 Device Self-Test: Not Supported 00:19:49.401 Directives: Not Supported 00:19:49.401 NVMe-MI: Not Supported 00:19:49.401 Virtualization Management: Not Supported 00:19:49.401 Doorbell Buffer Config: Not Supported 00:19:49.401 Get LBA Status Capability: Not Supported 00:19:49.401 Command & Feature Lockdown Capability: Not Supported 00:19:49.401 Abort Command Limit: 1 00:19:49.401 Async Event Request Limit: 1 00:19:49.401 Number of Firmware Slots: N/A 00:19:49.401 Firmware Slot 1 Read-Only: N/A 00:19:49.401 Firmware Activation Without Reset: N/A 00:19:49.401 Multiple Update Detection Support: N/A 00:19:49.401 Firmware Update Granularity: No Information Provided 00:19:49.401 Per-Namespace SMART Log: No 00:19:49.401 Asymmetric Namespace Access Log Page: Not Supported 00:19:49.401 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:49.401 Command Effects Log Page: Not Supported 00:19:49.401 Get Log Page Extended Data: Supported 00:19:49.401 Telemetry Log Pages: Not Supported 00:19:49.401 Persistent Event Log Pages: Not Supported 00:19:49.401 Supported Log Pages Log Page: May Support 00:19:49.401 Commands Supported & Effects Log Page: Not Supported 00:19:49.401 Feature Identifiers & Effects Log Page:May Support 00:19:49.401 NVMe-MI Commands & Effects Log Page: May Support 00:19:49.401 Data Area 4 for Telemetry Log: Not Supported 00:19:49.401 Error Log Page Entries Supported: 1 00:19:49.401 Keep Alive: Not Supported 00:19:49.401 00:19:49.401 NVM Command Set Attributes 00:19:49.401 ========================== 00:19:49.401 Submission Queue Entry Size 00:19:49.401 Max: 1 00:19:49.401 Min: 1 00:19:49.401 Completion Queue Entry Size 00:19:49.401 Max: 1 00:19:49.401 Min: 1 00:19:49.401 Number of Namespaces: 0 00:19:49.401 Compare Command: Not Supported 00:19:49.401 Write Uncorrectable Command: Not Supported 00:19:49.401 Dataset Management Command: Not Supported 00:19:49.401 Write Zeroes Command: Not Supported 00:19:49.401 Set Features Save Field: Not Supported 00:19:49.401 Reservations: Not Supported 00:19:49.401 Timestamp: Not Supported 00:19:49.401 Copy: Not Supported 00:19:49.401 Volatile Write Cache: Not Present 00:19:49.401 Atomic Write Unit (Normal): 1 00:19:49.401 Atomic Write Unit (PFail): 1 00:19:49.401 Atomic Compare & Write Unit: 1 00:19:49.401 Fused Compare & Write: Not Supported 00:19:49.401 Scatter-Gather List 00:19:49.401 SGL Command Set: Supported 00:19:49.401 SGL Keyed: Not Supported 00:19:49.401 SGL Bit Bucket Descriptor: Not Supported 00:19:49.401 SGL Metadata Pointer: Not Supported 00:19:49.401 Oversized SGL: Not Supported 00:19:49.401 SGL Metadata Address: Not Supported 00:19:49.401 SGL Offset: Supported 00:19:49.401 Transport SGL Data Block: Not Supported 00:19:49.401 Replay Protected Memory Block: Not Supported 00:19:49.401 00:19:49.401 Firmware Slot Information 00:19:49.401 ========================= 00:19:49.401 Active slot: 0 00:19:49.401 00:19:49.401 00:19:49.401 Error Log 
00:19:49.401 ========= 00:19:49.401 00:19:49.401 Active Namespaces 00:19:49.401 ================= 00:19:49.401 Discovery Log Page 00:19:49.401 ================== 00:19:49.401 Generation Counter: 2 00:19:49.401 Number of Records: 2 00:19:49.401 Record Format: 0 00:19:49.401 00:19:49.401 Discovery Log Entry 0 00:19:49.401 ---------------------- 00:19:49.401 Transport Type: 3 (TCP) 00:19:49.401 Address Family: 1 (IPv4) 00:19:49.401 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:49.401 Entry Flags: 00:19:49.401 Duplicate Returned Information: 0 00:19:49.401 Explicit Persistent Connection Support for Discovery: 0 00:19:49.401 Transport Requirements: 00:19:49.401 Secure Channel: Not Specified 00:19:49.401 Port ID: 1 (0x0001) 00:19:49.401 Controller ID: 65535 (0xffff) 00:19:49.401 Admin Max SQ Size: 32 00:19:49.401 Transport Service Identifier: 4420 00:19:49.401 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:49.401 Transport Address: 10.0.0.1 00:19:49.401 Discovery Log Entry 1 00:19:49.401 ---------------------- 00:19:49.401 Transport Type: 3 (TCP) 00:19:49.401 Address Family: 1 (IPv4) 00:19:49.401 Subsystem Type: 2 (NVM Subsystem) 00:19:49.401 Entry Flags: 00:19:49.401 Duplicate Returned Information: 0 00:19:49.401 Explicit Persistent Connection Support for Discovery: 0 00:19:49.401 Transport Requirements: 00:19:49.401 Secure Channel: Not Specified 00:19:49.401 Port ID: 1 (0x0001) 00:19:49.401 Controller ID: 65535 (0xffff) 00:19:49.401 Admin Max SQ Size: 32 00:19:49.401 Transport Service Identifier: 4420 00:19:49.401 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:49.401 Transport Address: 10.0.0.1 00:19:49.401 17:20:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:49.401 get_feature(0x01) failed 00:19:49.401 get_feature(0x02) failed 00:19:49.401 get_feature(0x04) failed 00:19:49.401 ===================================================== 00:19:49.401 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:49.401 ===================================================== 00:19:49.401 Controller Capabilities/Features 00:19:49.401 ================================ 00:19:49.401 Vendor ID: 0000 00:19:49.401 Subsystem Vendor ID: 0000 00:19:49.401 Serial Number: 0f630a61f2bc6e1bb3ba 00:19:49.402 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:49.402 Firmware Version: 6.8.9-20 00:19:49.402 Recommended Arb Burst: 6 00:19:49.402 IEEE OUI Identifier: 00 00 00 00:19:49.402 Multi-path I/O 00:19:49.402 May have multiple subsystem ports: Yes 00:19:49.402 May have multiple controllers: Yes 00:19:49.402 Associated with SR-IOV VF: No 00:19:49.402 Max Data Transfer Size: Unlimited 00:19:49.402 Max Number of Namespaces: 1024 00:19:49.402 Max Number of I/O Queues: 128 00:19:49.402 NVMe Specification Version (VS): 1.3 00:19:49.402 NVMe Specification Version (Identify): 1.3 00:19:49.402 Maximum Queue Entries: 1024 00:19:49.402 Contiguous Queues Required: No 00:19:49.402 Arbitration Mechanisms Supported 00:19:49.402 Weighted Round Robin: Not Supported 00:19:49.402 Vendor Specific: Not Supported 00:19:49.402 Reset Timeout: 7500 ms 00:19:49.402 Doorbell Stride: 4 bytes 00:19:49.402 NVM Subsystem Reset: Not Supported 00:19:49.402 Command Sets Supported 00:19:49.402 NVM Command Set: Supported 00:19:49.402 Boot Partition: Not Supported 00:19:49.402 Memory 
Page Size Minimum: 4096 bytes 00:19:49.402 Memory Page Size Maximum: 4096 bytes 00:19:49.402 Persistent Memory Region: Not Supported 00:19:49.402 Optional Asynchronous Events Supported 00:19:49.402 Namespace Attribute Notices: Supported 00:19:49.402 Firmware Activation Notices: Not Supported 00:19:49.402 ANA Change Notices: Supported 00:19:49.402 PLE Aggregate Log Change Notices: Not Supported 00:19:49.402 LBA Status Info Alert Notices: Not Supported 00:19:49.402 EGE Aggregate Log Change Notices: Not Supported 00:19:49.402 Normal NVM Subsystem Shutdown event: Not Supported 00:19:49.402 Zone Descriptor Change Notices: Not Supported 00:19:49.402 Discovery Log Change Notices: Not Supported 00:19:49.402 Controller Attributes 00:19:49.402 128-bit Host Identifier: Supported 00:19:49.402 Non-Operational Permissive Mode: Not Supported 00:19:49.402 NVM Sets: Not Supported 00:19:49.402 Read Recovery Levels: Not Supported 00:19:49.402 Endurance Groups: Not Supported 00:19:49.402 Predictable Latency Mode: Not Supported 00:19:49.402 Traffic Based Keep ALive: Supported 00:19:49.402 Namespace Granularity: Not Supported 00:19:49.402 SQ Associations: Not Supported 00:19:49.402 UUID List: Not Supported 00:19:49.402 Multi-Domain Subsystem: Not Supported 00:19:49.402 Fixed Capacity Management: Not Supported 00:19:49.402 Variable Capacity Management: Not Supported 00:19:49.402 Delete Endurance Group: Not Supported 00:19:49.402 Delete NVM Set: Not Supported 00:19:49.402 Extended LBA Formats Supported: Not Supported 00:19:49.402 Flexible Data Placement Supported: Not Supported 00:19:49.402 00:19:49.402 Controller Memory Buffer Support 00:19:49.402 ================================ 00:19:49.402 Supported: No 00:19:49.402 00:19:49.402 Persistent Memory Region Support 00:19:49.402 ================================ 00:19:49.402 Supported: No 00:19:49.402 00:19:49.402 Admin Command Set Attributes 00:19:49.402 ============================ 00:19:49.402 Security Send/Receive: Not Supported 00:19:49.402 Format NVM: Not Supported 00:19:49.402 Firmware Activate/Download: Not Supported 00:19:49.402 Namespace Management: Not Supported 00:19:49.402 Device Self-Test: Not Supported 00:19:49.402 Directives: Not Supported 00:19:49.402 NVMe-MI: Not Supported 00:19:49.402 Virtualization Management: Not Supported 00:19:49.402 Doorbell Buffer Config: Not Supported 00:19:49.402 Get LBA Status Capability: Not Supported 00:19:49.402 Command & Feature Lockdown Capability: Not Supported 00:19:49.402 Abort Command Limit: 4 00:19:49.402 Async Event Request Limit: 4 00:19:49.402 Number of Firmware Slots: N/A 00:19:49.402 Firmware Slot 1 Read-Only: N/A 00:19:49.402 Firmware Activation Without Reset: N/A 00:19:49.402 Multiple Update Detection Support: N/A 00:19:49.402 Firmware Update Granularity: No Information Provided 00:19:49.402 Per-Namespace SMART Log: Yes 00:19:49.402 Asymmetric Namespace Access Log Page: Supported 00:19:49.402 ANA Transition Time : 10 sec 00:19:49.402 00:19:49.402 Asymmetric Namespace Access Capabilities 00:19:49.402 ANA Optimized State : Supported 00:19:49.402 ANA Non-Optimized State : Supported 00:19:49.402 ANA Inaccessible State : Supported 00:19:49.402 ANA Persistent Loss State : Supported 00:19:49.402 ANA Change State : Supported 00:19:49.402 ANAGRPID is not changed : No 00:19:49.402 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:49.402 00:19:49.402 ANA Group Identifier Maximum : 128 00:19:49.402 Number of ANA Group Identifiers : 128 00:19:49.402 Max Number of Allowed Namespaces : 1024 00:19:49.402 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:49.402 Command Effects Log Page: Supported 00:19:49.402 Get Log Page Extended Data: Supported 00:19:49.402 Telemetry Log Pages: Not Supported 00:19:49.402 Persistent Event Log Pages: Not Supported 00:19:49.402 Supported Log Pages Log Page: May Support 00:19:49.402 Commands Supported & Effects Log Page: Not Supported 00:19:49.402 Feature Identifiers & Effects Log Page:May Support 00:19:49.402 NVMe-MI Commands & Effects Log Page: May Support 00:19:49.402 Data Area 4 for Telemetry Log: Not Supported 00:19:49.402 Error Log Page Entries Supported: 128 00:19:49.402 Keep Alive: Supported 00:19:49.402 Keep Alive Granularity: 1000 ms 00:19:49.402 00:19:49.402 NVM Command Set Attributes 00:19:49.402 ========================== 00:19:49.402 Submission Queue Entry Size 00:19:49.402 Max: 64 00:19:49.402 Min: 64 00:19:49.402 Completion Queue Entry Size 00:19:49.402 Max: 16 00:19:49.402 Min: 16 00:19:49.402 Number of Namespaces: 1024 00:19:49.402 Compare Command: Not Supported 00:19:49.402 Write Uncorrectable Command: Not Supported 00:19:49.402 Dataset Management Command: Supported 00:19:49.402 Write Zeroes Command: Supported 00:19:49.402 Set Features Save Field: Not Supported 00:19:49.402 Reservations: Not Supported 00:19:49.402 Timestamp: Not Supported 00:19:49.402 Copy: Not Supported 00:19:49.402 Volatile Write Cache: Present 00:19:49.402 Atomic Write Unit (Normal): 1 00:19:49.402 Atomic Write Unit (PFail): 1 00:19:49.402 Atomic Compare & Write Unit: 1 00:19:49.402 Fused Compare & Write: Not Supported 00:19:49.402 Scatter-Gather List 00:19:49.402 SGL Command Set: Supported 00:19:49.402 SGL Keyed: Not Supported 00:19:49.402 SGL Bit Bucket Descriptor: Not Supported 00:19:49.402 SGL Metadata Pointer: Not Supported 00:19:49.402 Oversized SGL: Not Supported 00:19:49.402 SGL Metadata Address: Not Supported 00:19:49.402 SGL Offset: Supported 00:19:49.402 Transport SGL Data Block: Not Supported 00:19:49.402 Replay Protected Memory Block: Not Supported 00:19:49.402 00:19:49.402 Firmware Slot Information 00:19:49.402 ========================= 00:19:49.402 Active slot: 0 00:19:49.402 00:19:49.402 Asymmetric Namespace Access 00:19:49.402 =========================== 00:19:49.402 Change Count : 0 00:19:49.402 Number of ANA Group Descriptors : 1 00:19:49.402 ANA Group Descriptor : 0 00:19:49.402 ANA Group ID : 1 00:19:49.402 Number of NSID Values : 1 00:19:49.402 Change Count : 0 00:19:49.402 ANA State : 1 00:19:49.402 Namespace Identifier : 1 00:19:49.402 00:19:49.402 Commands Supported and Effects 00:19:49.402 ============================== 00:19:49.402 Admin Commands 00:19:49.402 -------------- 00:19:49.402 Get Log Page (02h): Supported 00:19:49.402 Identify (06h): Supported 00:19:49.402 Abort (08h): Supported 00:19:49.402 Set Features (09h): Supported 00:19:49.402 Get Features (0Ah): Supported 00:19:49.402 Asynchronous Event Request (0Ch): Supported 00:19:49.402 Keep Alive (18h): Supported 00:19:49.402 I/O Commands 00:19:49.402 ------------ 00:19:49.402 Flush (00h): Supported 00:19:49.402 Write (01h): Supported LBA-Change 00:19:49.402 Read (02h): Supported 00:19:49.402 Write Zeroes (08h): Supported LBA-Change 00:19:49.402 Dataset Management (09h): Supported 00:19:49.402 00:19:49.402 Error Log 00:19:49.402 ========= 00:19:49.402 Entry: 0 00:19:49.402 Error Count: 0x3 00:19:49.402 Submission Queue Id: 0x0 00:19:49.402 Command Id: 0x5 00:19:49.402 Phase Bit: 0 00:19:49.402 Status Code: 0x2 00:19:49.402 Status Code Type: 0x0 00:19:49.402 Do Not Retry: 1 00:19:49.402 Error 
Location: 0x28 00:19:49.402 LBA: 0x0 00:19:49.402 Namespace: 0x0 00:19:49.402 Vendor Log Page: 0x0 00:19:49.402 ----------- 00:19:49.402 Entry: 1 00:19:49.402 Error Count: 0x2 00:19:49.402 Submission Queue Id: 0x0 00:19:49.402 Command Id: 0x5 00:19:49.402 Phase Bit: 0 00:19:49.402 Status Code: 0x2 00:19:49.402 Status Code Type: 0x0 00:19:49.402 Do Not Retry: 1 00:19:49.402 Error Location: 0x28 00:19:49.402 LBA: 0x0 00:19:49.402 Namespace: 0x0 00:19:49.402 Vendor Log Page: 0x0 00:19:49.402 ----------- 00:19:49.402 Entry: 2 00:19:49.402 Error Count: 0x1 00:19:49.402 Submission Queue Id: 0x0 00:19:49.403 Command Id: 0x4 00:19:49.403 Phase Bit: 0 00:19:49.403 Status Code: 0x2 00:19:49.403 Status Code Type: 0x0 00:19:49.403 Do Not Retry: 1 00:19:49.403 Error Location: 0x28 00:19:49.403 LBA: 0x0 00:19:49.403 Namespace: 0x0 00:19:49.403 Vendor Log Page: 0x0 00:19:49.403 00:19:49.403 Number of Queues 00:19:49.403 ================ 00:19:49.403 Number of I/O Submission Queues: 128 00:19:49.403 Number of I/O Completion Queues: 128 00:19:49.403 00:19:49.403 ZNS Specific Controller Data 00:19:49.403 ============================ 00:19:49.403 Zone Append Size Limit: 0 00:19:49.403 00:19:49.403 00:19:49.403 Active Namespaces 00:19:49.403 ================= 00:19:49.403 get_feature(0x05) failed 00:19:49.403 Namespace ID:1 00:19:49.403 Command Set Identifier: NVM (00h) 00:19:49.403 Deallocate: Supported 00:19:49.403 Deallocated/Unwritten Error: Not Supported 00:19:49.403 Deallocated Read Value: Unknown 00:19:49.403 Deallocate in Write Zeroes: Not Supported 00:19:49.403 Deallocated Guard Field: 0xFFFF 00:19:49.403 Flush: Supported 00:19:49.403 Reservation: Not Supported 00:19:49.403 Namespace Sharing Capabilities: Multiple Controllers 00:19:49.403 Size (in LBAs): 1310720 (5GiB) 00:19:49.403 Capacity (in LBAs): 1310720 (5GiB) 00:19:49.403 Utilization (in LBAs): 1310720 (5GiB) 00:19:49.403 UUID: 5006dcd6-824b-4d37-95dd-dc0e3bb16bb7 00:19:49.403 Thin Provisioning: Not Supported 00:19:49.403 Per-NS Atomic Units: Yes 00:19:49.403 Atomic Boundary Size (Normal): 0 00:19:49.403 Atomic Boundary Size (PFail): 0 00:19:49.403 Atomic Boundary Offset: 0 00:19:49.403 NGUID/EUI64 Never Reused: No 00:19:49.403 ANA group ID: 1 00:19:49.403 Namespace Write Protected: No 00:19:49.403 Number of LBA Formats: 1 00:19:49.403 Current LBA Format: LBA Format #00 00:19:49.403 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:49.403 00:19:49.403 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:49.403 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.403 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.662 rmmod nvme_tcp 00:19:49.662 rmmod nvme_fabrics 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:49.662 17:20:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:49.662 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:49.920 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:49.920 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:49.920 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:49.920 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.920 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.920 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.920 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:49.921 17:20:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:50.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:50.882 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:50.882 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:50.882 00:19:50.882 real 0m3.493s 00:19:50.882 user 0m1.266s 00:19:50.882 sys 0m1.541s 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.882 ************************************ 00:19:50.882 END TEST nvmf_identify_kernel_target 00:19:50.882 ************************************ 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.882 ************************************ 00:19:50.882 START TEST nvmf_auth_host 00:19:50.882 ************************************ 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:50.882 * Looking for test storage... 
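The clean_kernel_target teardown traced above boils down to a handful of configfs operations against the kernel nvmet target, followed by unloading the modules. A minimal sketch of the same sequence, assuming the bare `echo 0` in the trace writes to the namespace's enable attribute (the trace does not show the destination path):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"        # assumption: disable the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                   # unload once no holders remain

After that, setup.sh rebinds the local NVMe devices to uio_pci_generic, as the 0000:00:10.0 / 0000:00:11.0 lines above show.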
00:19:50.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:19:50.882 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:51.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.142 --rc genhtml_branch_coverage=1 00:19:51.142 --rc genhtml_function_coverage=1 00:19:51.142 --rc genhtml_legend=1 00:19:51.142 --rc geninfo_all_blocks=1 00:19:51.142 --rc geninfo_unexecuted_blocks=1 00:19:51.142 00:19:51.142 ' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:51.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.142 --rc genhtml_branch_coverage=1 00:19:51.142 --rc genhtml_function_coverage=1 00:19:51.142 --rc genhtml_legend=1 00:19:51.142 --rc geninfo_all_blocks=1 00:19:51.142 --rc geninfo_unexecuted_blocks=1 00:19:51.142 00:19:51.142 ' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:51.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.142 --rc genhtml_branch_coverage=1 00:19:51.142 --rc genhtml_function_coverage=1 00:19:51.142 --rc genhtml_legend=1 00:19:51.142 --rc geninfo_all_blocks=1 00:19:51.142 --rc geninfo_unexecuted_blocks=1 00:19:51.142 00:19:51.142 ' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:51.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.142 --rc genhtml_branch_coverage=1 00:19:51.142 --rc genhtml_function_coverage=1 00:19:51.142 --rc genhtml_legend=1 00:19:51.142 --rc geninfo_all_blocks=1 00:19:51.142 --rc geninfo_unexecuted_blocks=1 00:19:51.142 00:19:51.142 ' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.142 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.143 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:51.143 Cannot find device "nvmf_init_br" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:51.143 Cannot find device "nvmf_init_br2" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:51.143 Cannot find device "nvmf_tgt_br" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.143 Cannot find device "nvmf_tgt_br2" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:51.143 Cannot find device "nvmf_init_br" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:51.143 Cannot find device "nvmf_init_br2" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:51.143 Cannot find device "nvmf_tgt_br" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:51.143 Cannot find device "nvmf_tgt_br2" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:51.143 Cannot find device "nvmf_br" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:51.143 Cannot find device "nvmf_init_if" 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:51.143 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:51.403 Cannot find device "nvmf_init_if2" 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.403 17:20:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:51.403 17:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
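The nvmf_veth_init sequence traced above builds the test topology: a network namespace for the target, veth pairs for the initiator and target sides, and a bridge tying the peer ends together. The "Cannot find device" messages before it appear to be expected, since the stale-interface cleanup runs unconditionally and each failure is followed by a tolerated "# true". Condensed to a single initiator/target pair (the script creates two of each), the same steps look roughly like this:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br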
00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:51.403 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:51.662 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:51.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:51.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:19:51.662 00:19:51.662 --- 10.0.0.3 ping statistics --- 00:19:51.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.662 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:19:51.662 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:51.662 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:51.662 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:19:51.662 00:19:51.662 --- 10.0.0.4 ping statistics --- 00:19:51.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.663 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:51.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:51.663 00:19:51.663 --- 10.0.0.1 ping statistics --- 00:19:51.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.663 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:51.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:51.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:19:51.663 00:19:51.663 --- 10.0.0.2 ping statistics --- 00:19:51.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.663 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78290 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78290 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78290 ']' 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
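With the links bridged, the script opens TCP port 4420 on the initiator-side interfaces, verifies connectivity in both directions, loads nvme-tcp on the host, and launches nvmf_tgt inside the target namespace. Stripped of the SPDK_NVMF comment tags the script attaches for later cleanup, the checks traced above amount to:

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                  # host -> target veth
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # netns -> initiator veth
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &

The sub-millisecond ping times are consistent with the all-veth topology; the resulting nvmf_tgt PID (78290 here) is what waitforlisten polls next.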
00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.663 17:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=771f50aa02374c9f44fe25a4d8a0a625 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QAV 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 771f50aa02374c9f44fe25a4d8a0a625 0 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 771f50aa02374c9f44fe25a4d8a0a625 0 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=771f50aa02374c9f44fe25a4d8a0a625 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:52.600 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QAV 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QAV 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QAV 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.859 17:20:53 
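Each keys[i]/ckeys[i] entry here is produced the same way: gen_dhchap_key draws random hex from /dev/urandom, and format_dhchap_key wraps it in the DH-HMAC-CHAP secret representation (DHHC-1:<hash id>:<base64 payload>:) before the key file is written with mode 0600. A rough, self-contained sketch of that wrapping, assuming the hex text itself is used as the key material and that a little-endian CRC-32 of it is appended before base64 encoding (the python one-liner invoked in the trace is not shown in full, so those two details are assumptions):

len=32 hmac_id=0                                   # trace mapping: 0=null, 1=sha256, 2=sha384, 3=sha512
hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # $len hex characters of randomness
python3 - "$hex" "$hmac_id" <<'PYEOF'
import sys, base64, struct, zlib
secret = sys.argv[1].encode()                      # assumption: hex text used as-is
crc = struct.pack("<I", zlib.crc32(secret))        # assumption: CRC-32 appended before encoding
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]),
      base64.b64encode(secret + crc).decode()))
PYEOF

The same cycle repeats below for the remaining digests and key lengths, which is why the key/ckey pairs alternate between 32- and 64-character hex strings.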
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=405b254a45372dbf918d16adef3bfc5549e2a9661955a780f4e22e6b72b98a28 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.31C 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 405b254a45372dbf918d16adef3bfc5549e2a9661955a780f4e22e6b72b98a28 3 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 405b254a45372dbf918d16adef3bfc5549e2a9661955a780f4e22e6b72b98a28 3 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=405b254a45372dbf918d16adef3bfc5549e2a9661955a780f4e22e6b72b98a28 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.31C 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.31C 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.31C 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1439577bcdc10b0056c10ab0dab64a7da455647bb2b2325d 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mfY 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1439577bcdc10b0056c10ab0dab64a7da455647bb2b2325d 0 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1439577bcdc10b0056c10ab0dab64a7da455647bb2b2325d 0 
00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1439577bcdc10b0056c10ab0dab64a7da455647bb2b2325d 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mfY 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mfY 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.mfY 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:52.859 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=966d7bf979e259fe3b249a4f3d5754039428b52665e6314a 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.J5D 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 966d7bf979e259fe3b249a4f3d5754039428b52665e6314a 2 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 966d7bf979e259fe3b249a4f3d5754039428b52665e6314a 2 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=966d7bf979e259fe3b249a4f3d5754039428b52665e6314a 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.J5D 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.J5D 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.J5D 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.860 17:20:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=972082fb0a8a5c3ec669ca6910c7daff 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kzU 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 972082fb0a8a5c3ec669ca6910c7daff 1 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 972082fb0a8a5c3ec669ca6910c7daff 1 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=972082fb0a8a5c3ec669ca6910c7daff 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:52.860 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kzU 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kzU 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kzU 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cfda57e4a9c8ea970efdddb1125bd1cb 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UPc 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cfda57e4a9c8ea970efdddb1125bd1cb 1 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cfda57e4a9c8ea970efdddb1125bd1cb 1 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=cfda57e4a9c8ea970efdddb1125bd1cb 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UPc 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UPc 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UPc 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=be30ba5e6b2549a130cae7283d05a24872ac3b4781624b8d 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9Bl 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key be30ba5e6b2549a130cae7283d05a24872ac3b4781624b8d 2 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 be30ba5e6b2549a130cae7283d05a24872ac3b4781624b8d 2 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=be30ba5e6b2549a130cae7283d05a24872ac3b4781624b8d 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9Bl 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9Bl 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9Bl 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:53.119 17:20:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=73c06c683e296376be1516906d18b4bb 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CWD 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 73c06c683e296376be1516906d18b4bb 0 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 73c06c683e296376be1516906d18b4bb 0 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=73c06c683e296376be1516906d18b4bb 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CWD 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CWD 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.CWD 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8e95a82b772ba175ac64620ec9e7142ca1422a3f7bc151d6cb7484e15a38cbfd 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Q3i 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8e95a82b772ba175ac64620ec9e7142ca1422a3f7bc151d6cb7484e15a38cbfd 3 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8e95a82b772ba175ac64620ec9e7142ca1422a3f7bc151d6cb7484e15a38cbfd 3 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8e95a82b772ba175ac64620ec9e7142ca1422a3f7bc151d6cb7484e15a38cbfd 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:53.119 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:53.378 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Q3i 00:19:53.378 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Q3i 00:19:53.378 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Q3i 00:19:53.378 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:53.378 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78290 00:19:53.378 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78290 ']' 00:19:53.379 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.379 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.379 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.379 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.379 17:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QAV 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.31C ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.31C 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.mfY 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.J5D ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.J5D 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kzU 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UPc ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UPc 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9Bl 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.CWD ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.CWD 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Q3i 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.638 17:20:54 
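For reference, the gen_dhchap_key calls above draw 16, 24, or 32 random bytes with xxd, keep the hex text itself as the secret, wrap it into a DHHC-1 string, store it in a mode-0600 temp file, and then register each file with the running SPDK target through rpc_cmd keyring_file_add_key (key0..key4 plus the ckey* controller keys; waitforlisten only blocks until the target's RPC socket at /var/tmp/spdk.sock is up). A minimal stand-alone sketch of one such round follows. The helper name is made up, and the DHHC-1 payload layout (base64 of the ASCII secret followed by a little-endian CRC-32) is an assumption on my part; format_key in nvmf/common.sh is the authoritative implementation.

# sketch only: re-creates one "gen_dhchap_key sha256 32" round from the log above
format_dhchap_key_sketch() {   # args: <ascii secret> <digest id 0..3>
    # assumed layout: "DHHC-1:<digest>:" + base64(secret bytes + little-endian CRC-32) + ":"
    python3 -c 'import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, byteorder="little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))' "$1" "$2"
}

secret=$(xxd -p -c0 -l 16 /dev/urandom)          # 32 hex chars, matching "len=32" above
file=$(mktemp -t spdk.key-sha256.XXX)
format_dhchap_key_sketch "$secret" 1 > "$file"   # digest id 1 is sha256 in the digests table above
chmod 0600 "$file"
# hand the file to the target, as host/auth.sh@81 does (rpc.py path assumed relative to the repo)
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2 "$file"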
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:53.638 17:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:53.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:54.156 Waiting for block devices as requested 00:19:54.156 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:54.156 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:54.729 No valid GPT data, bailing 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:54.729 No valid GPT data, bailing 00:19:54.729 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:55.001 No valid GPT data, bailing 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:55.001 No valid GPT data, bailing 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 -a 10.0.0.1 -t tcp -s 4420 00:19:55.001 00:19:55.001 Discovery Log Number of Records 2, Generation counter 2 00:19:55.001 =====Discovery Log Entry 0====== 00:19:55.001 trtype: tcp 00:19:55.001 adrfam: ipv4 00:19:55.001 subtype: current discovery subsystem 00:19:55.001 treq: not specified, sq flow control disable supported 00:19:55.001 portid: 1 00:19:55.001 trsvcid: 4420 00:19:55.001 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:55.001 traddr: 10.0.0.1 00:19:55.001 eflags: none 00:19:55.001 sectype: none 00:19:55.001 =====Discovery Log Entry 1====== 00:19:55.001 trtype: tcp 00:19:55.001 adrfam: ipv4 00:19:55.001 subtype: nvme subsystem 00:19:55.001 treq: not specified, sq flow control disable supported 00:19:55.001 portid: 1 00:19:55.001 trsvcid: 4420 00:19:55.001 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:55.001 traddr: 10.0.0.1 00:19:55.001 eflags: none 00:19:55.001 sectype: none 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.001 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.002 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.002 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:19:55.002 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:19:55.002 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.002 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.262 nvme0n1 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.262 17:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:55.262 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.263 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.522 nvme0n1 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.522 
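The configure_kernel_target step further above builds the Linux kernel soft target that these attach calls authenticate against: it loads nvmet, walks /sys/block/nvme*, skips every device whose spdk-gpt.py probe finds a partition table ("No valid GPT data, bailing" means the device is free), settles on /dev/nvme1n1, exports it as namespace 1 of nqn.2024-02.io.spdk:cnode0 on a TCP port at 10.0.0.1:4420, and verifies the result with nvme discover (the two discovery log records). nvmet_auth_init and nvmet_auth_set_key then create a host entry for nqn.2024-02.io.spdk:host0 and load the hash, DH group, and DHHC-1 secrets into it. The xtrace output hides the redirect targets of the echo calls, so the configfs attribute names in the sketch below are assumptions based on the usual nvmet layout, not something the log states:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1" "$host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"     # the unused block device found above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"               # expose the subsystem on the port
echo 0 > "$subsys/attr_allow_any_host"                     # assumed: only allowed_hosts may connect
ln -s "$host" "$subsys/allowed_hosts/"
# per-host DH-CHAP material written by nvmet_auth_set_key (attribute names assumed)
echo 'hmac(sha256)'       > "$host/dhchap_hash"
echo ffdhe2048            > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:MTQz...:' > "$host/dhchap_key"             # host secret, abbreviated here
echo 'DHHC-1:02:OTY2...:' > "$host/dhchap_ctrl_key"        # controller secret, abbreviated here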
17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.522 17:20:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.522 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.782 nvme0n1 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:19:55.782 17:20:56 
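On the initiator side, each connect_authenticate round in this log is two RPCs plus a check and a teardown: bdev_nvme_set_options narrows the digests and DH groups the host will negotiate, bdev_nvme_attach_controller connects to the address that get_main_ns_ip resolved (NVMF_INITIATOR_IP for tcp, 10.0.0.1 here) using the keyring names registered earlier, bdev_nvme_get_controllers piped through jq confirms that nvme0 actually appeared, and bdev_nvme_detach_controller clears the way for the next combination. rpc_cmd is the test suite's wrapper around scripts/rpc.py; issued directly, the keyid=1 round looks roughly like this (rpc.py path assumed from the repo layout seen elsewhere in the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$rpc" -s "$sock" bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
"$rpc" -s "$sock" bdev_nvme_detach_controller nvme0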
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.782 nvme0n1 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.782 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.783 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.783 17:20:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.042 nvme0n1 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:56.042 
17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.042 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.043 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
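One detail worth noting in the round that starts here: keyid 4 was generated without a controller key (ckeys[4] is empty, so the [[ -z '' ]] check skips the controller secret entirely). Its attach call therefore passes only --dhchap-key, which exercises unidirectional DH-CHAP, where the host proves its identity but does not challenge the controller back; the earlier rounds passed --dhchap-ctrlr-key as well for bidirectional authentication. Reusing the variables from the sketch above:

# unidirectional variant, as in the keyid=4 round: no --dhchap-ctrlr-key
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4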
00:19:56.301 nvme0n1 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.301 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.302 17:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:56.561 17:20:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.561 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.820 nvme0n1 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.820 17:20:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.820 17:20:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.820 nvme0n1 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.820 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.079 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.080 nvme0n1 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.080 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.339 17:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.339 nvme0n1 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.339 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.340 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.599 nvme0n1 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:57.599 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.167 17:20:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.167 17:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.427 nvme0n1 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.427 17:20:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.427 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.686 nvme0n1 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.686 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.687 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.945 nvme0n1 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.945 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.204 nvme0n1 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.204 17:20:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.204 17:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.505 nvme0n1 00:19:59.505 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.506 17:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.408 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.409 17:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.409 nvme0n1 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.409 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.976 nvme0n1 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.976 17:21:02 
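
The trace above, and every block that follows, repeats one cycle per key index, driven by the host/auth.sh line tags that xtrace prints. Reconstructed loosely from those tags, the cycle amounts to the sketch below; rpc_cmd, get_main_ns_ip and the keys[]/ckeys[] arrays come from the surrounding test harness and are assumptions here, while the echoed values and RPC names are taken verbatim from the log.

  # Hedged sketch of the per-key cycle (tags host/auth.sh@42-65 and @103-104 in the trace).
  nvmet_auth_set_key() {            # target side: install the DH-HMAC-CHAP key
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]}
      echo "hmac(${digest})"            # @48 - hash selection (redirection target not shown by xtrace)
      echo "$dhgroup"                   # @49 - FFDHE group
      echo "$key"                       # @50 - host secret (DHHC-1:..)
      [[ -z $ckey ]] || echo "$ckey"    # @51 - optional controller secret
  }

  connect_authenticate() {          # initiator side (@55-65)
      local digest dhgroup keyid ckey
      digest=$1 dhgroup=$2 keyid=$3
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})      # @58
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"                               # @60
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"                    # @61
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # @64
      rpc_cmd bdev_nvme_detach_controller nvme0                      # @65
  }
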
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.976 17:21:02 
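
The autotest_common.sh@561/@10 pairs bracketing every rpc_cmd call are the harness turning xtrace off while the JSON-RPC is issued, and the recurring @589 '[[ 0 == 0 ]]' lines are the saved return code being re-checked once tracing is back on. A minimal sketch of that visible effect only; the real helpers do more bookkeeping (nesting, saved flags), and the wrapper name below is hypothetical.

  xtrace_disable() { set +x; }      # what the @561 -> @10 'set +x' pair amounts to
  xtrace_restore() { set -x; }
  rpc_quiet() {                     # hypothetical wrapper, for illustration
      local rc=0
      xtrace_disable
      "$@" || rc=$?
      xtrace_restore
      [[ $rc == 0 ]]                # printed by xtrace as '[[ 0 == 0 ]]' on success
  }
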
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.976 17:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.542 nvme0n1 00:20:02.542 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.542 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.542 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.542 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.542 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.542 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.542 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.542 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.543 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.543 
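
The DHHC-1:NN:<base64>: strings echoed into the target configuration are NVMe in-band authentication secrets in the textual form used by nvme-cli and SPDK: the literal DHHC-1, a two-digit field that appears to record how the secret was generated (00 meaning it is used as-is, 01/02/03 a SHA-256/384/512 transformation), the base64-encoded secret material, and a trailing colon. One of the keys from this log, pulled apart in plain bash (the key value is copied verbatim from the trace):

  key='DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt:'
  IFS=: read -r magic xform secret _ <<< "$key"
  echo "$magic $xform"                    # DHHC-1 00
  echo -n "$secret" | base64 -d | wc -c   # length of the decoded secret blob in bytes
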
17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.802 nvme0n1 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.802 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.803 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.803 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.803 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.371 nvme0n1 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.371 17:21:03 
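
At this point the trace moves from ffdhe6144 to ffdhe8192 while keeping sha256, which matches the @100/@101/@102 loop tags: digest outermost, DH group next, key index innermost. A sketch of that driver loop, restricted to the values visible in this part of the log (the full run covers more groups; keys[]/ckeys[] are populated by the harness as in the earlier sketch):

  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do              # @100
      for dhgroup in "${dhgroups[@]}"; do        # @101
          for keyid in "${!keys[@]}"; do         # @102 - indices 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
          done
      done
  done
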
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.371 17:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.946 nvme0n1 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.946 17:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.515 nvme0n1 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.515 
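
The nvmf/common.sh@769-783 run that precedes every attach is get_main_ns_ip choosing which address to dial: it maps the transport to the name of an environment variable, dereferences it, and prints the value (10.0.0.1 throughout this run). A hedged reconstruction from those xtrace lines; the transport variable name (TEST_TRANSPORT) and the early-return style are assumptions.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates                              # @770
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP          # @772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP              # @773
      [[ -z $TEST_TRANSPORT ]] && return 1                # @775 - transport set?
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                # @776 - variable *name*
      [[ -z ${!ip} ]] && return 1                         # @778 - value present?
      echo "${!ip}"                                       # @783 - 10.0.0.1 here
  }
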
17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.515 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.084 nvme0n1 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.084 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.344 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.344 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.344 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.344 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:05.344 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.344 17:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.912 nvme0n1 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.912 17:21:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.912 17:21:06 
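
Key index 4 is the one slot without a controller key: the trace shows ckey= at @46, [[ -z '' ]] at @51, and an attach at @61 that passes --dhchap-key key4 only, so the controller is not asked to authenticate back. The ':+' expansion at @58 is what silently drops the extra option; a stand-alone repro with rpc_cmd stubbed so the resulting command line is just printed:

  rpc_cmd() { echo "rpc_cmd $*"; }     # stub for illustration only
  ckeys[4]=""
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]} extra arguments"   # prints: 0 extra arguments
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
      -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4 "${ckey[@]}"
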
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.912 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.913 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:05.913 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.913 17:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.481 nvme0n1 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.481 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.482 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:06.741 nvme0n1 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.741 nvme0n1 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.741 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:07.024 
17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.024 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.025 nvme0n1 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.025 
17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.025 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.301 nvme0n1 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.301 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.302 17:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.302 nvme0n1 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.302 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.561 nvme0n1 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.561 
17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:07.561 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.562 17:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.562 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.821 nvme0n1 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:07.821 17:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.821 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.080 nvme0n1 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.080 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.081 17:21:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.081 nvme0n1 00:20:08.081 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.340 
17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.340 17:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
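The entries above complete one sha384 pass over the ffdhe2048 and ffdhe3072 DH groups, and the entries that follow repeat the same loop for ffdhe4096: for each keyid the target-side key is published, the host is restricted to the matching digest and dhgroup, the controller is attached with DH-CHAP, its presence is verified, and it is detached before the next key. A minimal sketch of a single iteration, assuming the rpc_cmd and nvmet_auth_set_key helpers from the SPDK test scripts traced here, with key1/ckey1 standing for keyring entries registered earlier in this run:

  # Target side: publish the hash, DH group and DHHC-1 key/ctrlr-key for keyid 1,
  # matching the echo 'hmac(sha384)' / echo ffdhe3072 / echo DHHC-1:... entries in the trace.
  nvmet_auth_set_key sha384 ffdhe3072 1

  # Host side: limit the initiator to the same digest and DH group ...
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # ... attach over TCP to 10.0.0.1:4420, authenticating with the host and controller keys ...
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # ... confirm the controller authenticated and came up, then tear it down for the next keyid.
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0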
00:20:08.340 nvme0n1 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.340 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.599 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:08.600 17:21:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.600 nvme0n1 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.600 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.859 17:21:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.859 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.860 17:21:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.860 nvme0n1 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.860 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.119 nvme0n1 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.119 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.378 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.379 17:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.379 nvme0n1 00:20:09.379 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.379 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.379 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.379 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.379 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.379 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.638 nvme0n1 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.638 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.639 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
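At this point the trace has been cycling DH-HMAC-CHAP key IDs with hmac(sha384) and the ffdhe4096 DH group. The host-side half of one iteration reduces to two SPDK RPCs: restrict the negotiable digest and DH group, then attach with the in-band secrets and verify the controller came up. The sketch below is an illustration assembled from the flags visible in this trace, not the literal body of host/auth.sh; it assumes keyring entries named key1/ckey1 were registered earlier in the run (that setup is not shown in this excerpt), and rpc_cmd stands for the suite's RPC wrapper.

    # Host-side sketch of one connect_authenticate-style iteration (illustrative only).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Confirm the controller authenticated and attached, then detach for the next key ID.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0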
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.898 17:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.898 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.157 nvme0n1 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.157 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.158 17:21:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.158 17:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.725 nvme0n1 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.725 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.984 nvme0n1 00:20:10.984 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.985 17:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.580 nvme0n1 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.580 17:21:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.580 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.843 nvme0n1 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
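Before each attach, the target-side helper nvmet_auth_set_key mirrors the same key: it picks the HMAC, the DH group, the host secret, and (when one exists for that key ID) the controller secret, as seen in the echo calls at host/auth.sh@48-51. The destinations of those echoes are not visible in this excerpt; the sketch below assumes the standard Linux kernel nvmet configfs attributes under the host NQN directory, which may not match exactly what host/auth.sh writes.

    # Target-side sketch; configfs paths and attribute names are assumptions, not taken from this log.
    hostnqn=nqn.2024-02.io.spdk:host0
    cfs=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha384)' > "$cfs/dhchap_hash"        # digest for DH-HMAC-CHAP
    echo ffdhe8192      > "$cfs/dhchap_dhgroup"     # DH group for this pass
    echo "$key"         > "$cfs/dhchap_key"         # DHHC-1:... host secret for this key ID
    # A controller secret is only installed when a ckey exists for this key ID:
    [[ -n $ckey ]] && echo "$ckey" > "$cfs/dhchap_ctrl_key"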
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.843 17:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.410 nvme0n1 00:20:12.410 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.410 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.410 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.410 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.410 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.410 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:12.669 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.670 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.239 nvme0n1 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.239 17:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.239 17:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.239 17:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.807 nvme0n1 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.807 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:13.808 17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.808 
17:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.375 nvme0n1 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.375 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.376 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.943 nvme0n1 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:14.943 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:14.944 17:21:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.944 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.203 17:21:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.203 nvme0n1 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:15.203 17:21:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.203 17:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.462 nvme0n1 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.462 nvme0n1 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.462 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:15.722 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 nvme0n1 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.723 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 nvme0n1 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:15.982 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.983 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.241 nvme0n1 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.241 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.242 17:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.242 nvme0n1 00:20:16.242 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.242 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.242 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.242 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.242 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.242 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:16.500 
17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.500 nvme0n1 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.500 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.758 
17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.758 nvme0n1 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.758 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:16.759 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.759 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:16.759 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:16.759 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:16.759 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:16.759 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.759 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.017 nvme0n1 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.017 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 nvme0n1 00:20:17.275 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.275 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.275 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.275 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.275 17:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.275 
17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.275 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.276 17:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.276 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.534 nvme0n1 00:20:17.534 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.534 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.534 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.534 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.534 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.534 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:17.792 17:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.792 nvme0n1 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.792 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:18.050 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.051 17:21:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.051 nvme0n1 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.051 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:18.318 
17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:18.318 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.319 17:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
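[annotation] The trace above is one complete pass of the test's connect_authenticate helper for sha512 / ffdhe4096 with key index 4: the host-side bdev driver is restricted to a single digest and DH group, a controller is attached over TCP with the DH-CHAP host key, the controller name is verified, and the controller is detached. Condensed from the rpc_cmd calls visible in the log, a standalone reproduction might look roughly like the sketch below. This is a hedged sketch: it assumes SPDK's scripts/rpc.py wrapper is the RPC entry point and that the key names (key4, ckey3, ...) were registered with the running SPDK application earlier in the test, which is outside this excerpt.

```bash
#!/usr/bin/env bash
# Hedged sketch of one connect_authenticate pass (sha512 / ffdhe4096 / keyid 4),
# condensed from the rpc_cmd calls in the trace above. Assumes ./scripts/rpc.py
# exists and that "key4" was registered with the SPDK app beforehand
# (not shown in this excerpt).

RPC=./scripts/rpc.py
ADDR=10.0.0.1                       # resolved by get_main_ns_ip (NVMF_INITIATOR_IP) in the trace
HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0

# Restrict the host to a single digest/DH group so the DH-CHAP handshake must use them.
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Attach over TCP, authenticating with DH-CHAP key index 4 (this index has no
# controller key in the test, so only unidirectional authentication is attempted).
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ADDR" -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key4

# Verify the controller came up under the expected name, then tear it down.
name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "authenticated connect failed" >&2; exit 1; }
$RPC bdev_nvme_detach_controller nvme0
```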
00:20:18.319 nvme0n1 00:20:18.319 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.319 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.319 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.319 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.319 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.319 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.600 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:18.601 17:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.601 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.860 nvme0n1 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.860 17:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.860 17:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.860 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.118 nvme0n1 00:20:19.118 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.118 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.118 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.118 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.118 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.118 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.377 17:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.637 nvme0n1 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.637 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.205 nvme0n1 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.205 17:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.464 nvme0n1 00:20:20.464 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.464 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.464 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.464 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.464 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.464 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzcxZjUwYWEwMjM3NGM5ZjQ0ZmUyNWE0ZDhhMGE2MjU5fXCt: 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: ]] 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDA1YjI1NGE0NTM3MmRiZjkxOGQxNmFkZWYzYmZjNTU0OWUyYTk2NjE5NTVhNzgwZjRlMjJlNmI3MmI5OGEyOIPJluU=: 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.723 17:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.723 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.291 nvme0n1 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:21.291 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.292 17:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.292 17:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.859 nvme0n1 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:21.860 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.119 17:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.687 nvme0n1 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmUzMGJhNWU2YjI1NDlhMTMwY2FlNzI4M2QwNWEyNDg3MmFjM2I0NzgxNjI0YjhkpuaTTQ==: 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNjMDZjNjgzZTI5NjM3NmJlMTUxNjkwNmQxOGI0YmJE14kw: 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.687 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.256 nvme0n1 00:20:23.256 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.256 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.256 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.256 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.256 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.256 17:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU5NWE4MmI3NzJiYTE3NWFjNjQ2MjBlYzllNzE0MmNhMTQyMmEzZjdiYzE1MWQ2Y2I3NDg0ZTE1YTM4Y2JmZBfKjkc=: 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.256 17:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.256 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.515 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.084 nvme0n1 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.084 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.084 request: 00:20:24.084 { 00:20:24.084 "name": "nvme0", 00:20:24.084 "trtype": "tcp", 00:20:24.084 "traddr": "10.0.0.1", 00:20:24.084 "adrfam": "ipv4", 00:20:24.085 "trsvcid": "4420", 00:20:24.085 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:24.085 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:24.085 "prchk_reftag": false, 00:20:24.085 "prchk_guard": false, 00:20:24.085 "hdgst": false, 00:20:24.085 "ddgst": false, 00:20:24.085 "allow_unrecognized_csi": false, 00:20:24.085 "method": "bdev_nvme_attach_controller", 00:20:24.085 "req_id": 1 00:20:24.085 } 00:20:24.085 Got JSON-RPC error response 00:20:24.085 response: 00:20:24.085 { 00:20:24.085 "code": -5, 00:20:24.085 "message": "Input/output error" 00:20:24.085 } 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.085 request: 00:20:24.085 { 00:20:24.085 "name": "nvme0", 00:20:24.085 "trtype": "tcp", 00:20:24.085 "traddr": "10.0.0.1", 00:20:24.085 "adrfam": "ipv4", 00:20:24.085 "trsvcid": "4420", 00:20:24.085 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:24.085 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:24.085 "prchk_reftag": false, 00:20:24.085 "prchk_guard": false, 00:20:24.085 "hdgst": false, 00:20:24.085 "ddgst": false, 00:20:24.085 "dhchap_key": "key2", 00:20:24.085 "allow_unrecognized_csi": false, 00:20:24.085 "method": "bdev_nvme_attach_controller", 00:20:24.085 "req_id": 1 00:20:24.085 } 00:20:24.085 Got JSON-RPC error response 00:20:24.085 response: 00:20:24.085 { 00:20:24.085 "code": -5, 00:20:24.085 "message": "Input/output error" 00:20:24.085 } 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.085 17:21:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.085 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.344 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.345 request: 00:20:24.345 { 00:20:24.345 "name": "nvme0", 00:20:24.345 "trtype": "tcp", 00:20:24.345 "traddr": "10.0.0.1", 00:20:24.345 "adrfam": "ipv4", 00:20:24.345 "trsvcid": "4420", 
00:20:24.345 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:24.345 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:24.345 "prchk_reftag": false, 00:20:24.345 "prchk_guard": false, 00:20:24.345 "hdgst": false, 00:20:24.345 "ddgst": false, 00:20:24.345 "dhchap_key": "key1", 00:20:24.345 "dhchap_ctrlr_key": "ckey2", 00:20:24.345 "allow_unrecognized_csi": false, 00:20:24.345 "method": "bdev_nvme_attach_controller", 00:20:24.345 "req_id": 1 00:20:24.345 } 00:20:24.345 Got JSON-RPC error response 00:20:24.345 response: 00:20:24.345 { 00:20:24.345 "code": -5, 00:20:24.345 "message": "Input/output error" 00:20:24.345 } 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.345 17:21:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.345 nvme0n1 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.345 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.605 request: 00:20:24.605 { 00:20:24.605 "name": "nvme0", 00:20:24.605 "dhchap_key": "key1", 00:20:24.605 "dhchap_ctrlr_key": "ckey2", 00:20:24.605 "method": "bdev_nvme_set_keys", 00:20:24.605 "req_id": 1 00:20:24.605 } 00:20:24.605 Got JSON-RPC error response 00:20:24.605 response: 00:20:24.605 
{ 00:20:24.605 "code": -13, 00:20:24.605 "message": "Permission denied" 00:20:24.605 } 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:24.605 17:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTQzOTU3N2JjZGMxMGIwMDU2YzEwYWIwZGFiNjRhN2RhNDU1NjQ3YmIyYjIzMjVk2wN3YA==: 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: ]] 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2ZDdiZjk3OWUyNTlmZTNiMjQ5YTRmM2Q1NzU0MDM5NDI4YjUyNjY1ZTYzMTRhpLNmxg==: 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.542 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.801 nvme0n1 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTcyMDgyZmIwYThhNWMzZWM2NjljYTY5MTBjN2RhZmYTSDXI: 00:20:25.801 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: ]] 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZkYTU3ZTRhOWM4ZWE5NzBlZmRkZGIxMTI1YmQxY2IkG8s8: 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.802 request: 00:20:25.802 { 00:20:25.802 "name": "nvme0", 00:20:25.802 "dhchap_key": "key2", 00:20:25.802 "dhchap_ctrlr_key": "ckey1", 00:20:25.802 "method": "bdev_nvme_set_keys", 00:20:25.802 "req_id": 1 00:20:25.802 } 00:20:25.802 Got JSON-RPC error response 00:20:25.802 response: 00:20:25.802 { 00:20:25.802 "code": -13, 00:20:25.802 "message": "Permission denied" 00:20:25.802 } 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:25.802 17:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.738 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.738 rmmod nvme_tcp 00:20:26.997 rmmod nvme_fabrics 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78290 ']' 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78290 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 78290 ']' 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 78290 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78290 00:20:26.997 killing process with pid 78290 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78290' 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 78290 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 78290 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:26.997 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:27.256 17:21:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:27.256 17:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.256 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.256 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:27.256 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.256 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.256 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.514 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:27.514 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:27.514 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:27.515 17:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:28.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.342 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:20:28.342 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:28.342 17:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QAV /tmp/spdk.key-null.mfY /tmp/spdk.key-sha256.kzU /tmp/spdk.key-sha384.9Bl /tmp/spdk.key-sha512.Q3i /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:28.342 17:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:28.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.723 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:28.723 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:28.723 00:20:28.723 real 0m37.828s 00:20:28.723 user 0m34.474s 00:20:28.723 sys 0m3.998s 00:20:28.723 17:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:28.723 17:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.723 ************************************ 00:20:28.723 END TEST nvmf_auth_host 00:20:28.723 ************************************ 00:20:28.723 17:21:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:28.723 17:21:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:28.723 17:21:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:28.723 17:21:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:28.723 17:21:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.724 ************************************ 00:20:28.724 START TEST nvmf_digest 00:20:28.724 ************************************ 00:20:28.724 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:28.984 * Looking for test storage... 
00:20:28.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:28.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.984 --rc genhtml_branch_coverage=1 00:20:28.984 --rc genhtml_function_coverage=1 00:20:28.984 --rc genhtml_legend=1 00:20:28.984 --rc geninfo_all_blocks=1 00:20:28.984 --rc geninfo_unexecuted_blocks=1 00:20:28.984 00:20:28.984 ' 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:28.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.984 --rc genhtml_branch_coverage=1 00:20:28.984 --rc genhtml_function_coverage=1 00:20:28.984 --rc genhtml_legend=1 00:20:28.984 --rc geninfo_all_blocks=1 00:20:28.984 --rc geninfo_unexecuted_blocks=1 00:20:28.984 00:20:28.984 ' 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:28.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.984 --rc genhtml_branch_coverage=1 00:20:28.984 --rc genhtml_function_coverage=1 00:20:28.984 --rc genhtml_legend=1 00:20:28.984 --rc geninfo_all_blocks=1 00:20:28.984 --rc geninfo_unexecuted_blocks=1 00:20:28.984 00:20:28.984 ' 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:28.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.984 --rc genhtml_branch_coverage=1 00:20:28.984 --rc genhtml_function_coverage=1 00:20:28.984 --rc genhtml_legend=1 00:20:28.984 --rc geninfo_all_blocks=1 00:20:28.984 --rc geninfo_unexecuted_blocks=1 00:20:28.984 00:20:28.984 ' 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.984 17:21:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.984 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.985 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:28.985 Cannot find device "nvmf_init_br" 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:28.985 Cannot find device "nvmf_init_br2" 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:28.985 Cannot find device "nvmf_tgt_br" 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:28.985 Cannot find device "nvmf_tgt_br2" 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:28.985 Cannot find device "nvmf_init_br" 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:28.985 Cannot find device "nvmf_init_br2" 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:28.985 Cannot find device "nvmf_tgt_br" 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:28.985 Cannot find device "nvmf_tgt_br2" 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:28.985 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:29.244 Cannot find device "nvmf_br" 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:29.244 Cannot find device "nvmf_init_if" 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:29.244 Cannot find device "nvmf_init_if2" 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.244 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:29.244 17:21:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:29.244 17:21:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:29.244 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:29.244 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:29.504 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:29.504 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:20:29.504 00:20:29.504 --- 10.0.0.3 ping statistics --- 00:20:29.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.504 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:29.504 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:29.504 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:20:29.504 00:20:29.504 --- 10.0.0.4 ping statistics --- 00:20:29.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.504 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:29.504 00:20:29.504 --- 10.0.0.1 ping statistics --- 00:20:29.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.504 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:29.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:20:29.504 00:20:29.504 --- 10.0.0.2 ping statistics --- 00:20:29.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.504 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:29.504 ************************************ 00:20:29.504 START TEST nvmf_digest_clean 00:20:29.504 ************************************ 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
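For reference, the nvmf_veth_init sequence traced above builds a small bridged topology: two initiator veth pairs on the host (10.0.0.1 and 10.0.0.2) and two target pairs whose far ends sit in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge, with iptables ACCEPT rules for TCP port 4420. A condensed sketch of one initiator/target pair, using only commands that appear in the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                          # bridge the host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                               # host -> listener address in the namespace

This is only a sketch of what the script does, not a replacement for nvmf/common.sh; the real setup also creates the second pair (nvmf_init_if2/nvmf_tgt_if2) and tags its iptables rules with an SPDK_NVMF comment so they can be stripped again during cleanup.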
00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79943 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79943 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79943 ']' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:29.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:29.504 17:21:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.504 [2024-11-04 17:21:30.213127] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:20:29.504 [2024-11-04 17:21:30.214036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.763 [2024-11-04 17:21:30.372821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.763 [2024-11-04 17:21:30.428537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.763 [2024-11-04 17:21:30.428595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.763 [2024-11-04 17:21:30.428620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.763 [2024-11-04 17:21:30.428631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.763 [2024-11-04 17:21:30.428641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
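The target for this suite is the nvmf_tgt application started inside that namespace with subsystem initialization deferred until RPC; condensed from the nvmfappstart trace above (pid 79943 in this run):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # the harness then waits for /var/tmp/spdk.sock and applies the target config over JSON-RPC (rpc_cmd);
  # the notices that follow show the uring sock override, a null0 bdev, the TCP transport, and a
  # listener on 10.0.0.3:4420 being set up through that channel.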
00:20:29.763 [2024-11-04 17:21:30.429068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.700 [2024-11-04 17:21:31.291969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.700 null0 00:20:30.700 [2024-11-04 17:21:31.345962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.700 [2024-11-04 17:21:31.370064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79975 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79975 /var/tmp/bperf.sock 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79975 ']' 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:30.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.700 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.700 [2024-11-04 17:21:31.434685] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:20:30.700 [2024-11-04 17:21:31.434982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79975 ] 00:20:30.959 [2024-11-04 17:21:31.589551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.959 [2024-11-04 17:21:31.646695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.959 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.959 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:20:30.959 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:30.959 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:30.959 17:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:31.527 [2024-11-04 17:21:32.033374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:31.527 17:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.527 17:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.786 nvme0n1 00:20:31.786 17:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:31.786 17:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:32.045 Running I/O for 2 seconds... 
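Each run_bperf pass in this suite follows the same shape; condensed from the commands traced above for the 4 KiB randread case (paths, flags, and the jq filter all as they appear in this log):

  # bdevperf idles on its own RPC socket until framework_start_init is sent
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # attach the controller with data digest enabled so every I/O carries a crc32c data digest
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the 2-second workload, then check which accel module computed the digests
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

With scan_dsa=false the check that follows each run expects the first field to be "software" and the executed count to be non-zero, i.e. the crc32c digest work really went through the software accel module.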
00:20:33.917 15494.00 IOPS, 60.52 MiB/s [2024-11-04T17:21:34.721Z] 15684.50 IOPS, 61.27 MiB/s 00:20:33.917 Latency(us) 00:20:33.917 [2024-11-04T17:21:34.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.917 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:33.917 nvme0n1 : 2.01 15704.88 61.35 0.00 0.00 8144.27 7000.44 19184.17 00:20:33.917 [2024-11-04T17:21:34.721Z] =================================================================================================================== 00:20:33.918 [2024-11-04T17:21:34.722Z] Total : 15704.88 61.35 0.00 0.00 8144.27 7000.44 19184.17 00:20:33.918 { 00:20:33.918 "results": [ 00:20:33.918 { 00:20:33.918 "job": "nvme0n1", 00:20:33.918 "core_mask": "0x2", 00:20:33.918 "workload": "randread", 00:20:33.918 "status": "finished", 00:20:33.918 "queue_depth": 128, 00:20:33.918 "io_size": 4096, 00:20:33.918 "runtime": 2.005555, 00:20:33.918 "iops": 15704.879696642574, 00:20:33.918 "mibps": 61.347186315010056, 00:20:33.918 "io_failed": 0, 00:20:33.918 "io_timeout": 0, 00:20:33.918 "avg_latency_us": 8144.271912303337, 00:20:33.918 "min_latency_us": 7000.436363636363, 00:20:33.918 "max_latency_us": 19184.174545454545 00:20:33.918 } 00:20:33.918 ], 00:20:33.918 "core_count": 1 00:20:33.918 } 00:20:33.918 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:33.918 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:33.918 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:33.918 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:33.918 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:33.918 | select(.opcode=="crc32c") 00:20:33.918 | "\(.module_name) \(.executed)"' 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79975 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79975 ']' 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79975 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79975 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:34.178 killing process with pid 79975 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79975' 00:20:34.178 Received shutdown signal, test time was about 2.000000 seconds 00:20:34.178 00:20:34.178 Latency(us) 00:20:34.178 [2024-11-04T17:21:34.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.178 [2024-11-04T17:21:34.982Z] =================================================================================================================== 00:20:34.178 [2024-11-04T17:21:34.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79975 00:20:34.178 17:21:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79975 00:20:34.436 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80029 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80029 /var/tmp/bperf.sock 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80029 ']' 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:34.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:34.437 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:34.437 [2024-11-04 17:21:35.202129] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:20:34.437 [2024-11-04 17:21:35.202435] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80029 ] 00:20:34.437 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:34.437 Zero copy mechanism will not be used. 00:20:34.695 [2024-11-04 17:21:35.346605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.695 [2024-11-04 17:21:35.401910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.695 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:34.695 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:20:34.695 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:34.695 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:34.695 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:34.954 [2024-11-04 17:21:35.749021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:35.212 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.212 17:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.471 nvme0n1 00:20:35.471 17:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:35.471 17:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:35.730 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:35.730 Zero copy mechanism will not be used. 00:20:35.730 Running I/O for 2 seconds... 
00:20:37.602 7664.00 IOPS, 958.00 MiB/s [2024-11-04T17:21:38.406Z] 7536.00 IOPS, 942.00 MiB/s 00:20:37.602 Latency(us) 00:20:37.602 [2024-11-04T17:21:38.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.602 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:37.602 nvme0n1 : 2.00 7531.42 941.43 0.00 0.00 2121.03 1690.53 8698.41 00:20:37.602 [2024-11-04T17:21:38.406Z] =================================================================================================================== 00:20:37.602 [2024-11-04T17:21:38.406Z] Total : 7531.42 941.43 0.00 0.00 2121.03 1690.53 8698.41 00:20:37.602 { 00:20:37.602 "results": [ 00:20:37.602 { 00:20:37.602 "job": "nvme0n1", 00:20:37.602 "core_mask": "0x2", 00:20:37.602 "workload": "randread", 00:20:37.602 "status": "finished", 00:20:37.602 "queue_depth": 16, 00:20:37.602 "io_size": 131072, 00:20:37.602 "runtime": 2.003342, 00:20:37.602 "iops": 7531.415005525766, 00:20:37.602 "mibps": 941.4268756907207, 00:20:37.602 "io_failed": 0, 00:20:37.602 "io_timeout": 0, 00:20:37.602 "avg_latency_us": 2121.025770750988, 00:20:37.602 "min_latency_us": 1690.530909090909, 00:20:37.602 "max_latency_us": 8698.414545454545 00:20:37.602 } 00:20:37.602 ], 00:20:37.602 "core_count": 1 00:20:37.602 } 00:20:37.602 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:37.602 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:37.602 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:37.602 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:37.602 | select(.opcode=="crc32c") 00:20:37.602 | "\(.module_name) \(.executed)"' 00:20:37.602 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80029 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80029 ']' 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80029 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80029 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
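As a quick consistency check on the 128 KiB table above: with an io_size of 131072 bytes, MiB/s is simply IOPS / 8, so 7531.42 IOPS corresponds to about 941.43 MiB/s, matching the reported average; the same relation (IOPS x 4096 / 2^20) holds for the 4 KiB runs.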
00:20:38.181 killing process with pid 80029 00:20:38.181 Received shutdown signal, test time was about 2.000000 seconds 00:20:38.181 00:20:38.181 Latency(us) 00:20:38.181 [2024-11-04T17:21:38.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.181 [2024-11-04T17:21:38.985Z] =================================================================================================================== 00:20:38.181 [2024-11-04T17:21:38.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80029' 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80029 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80029 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80078 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80078 /var/tmp/bperf.sock 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80078 ']' 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:38.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:38.181 17:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:38.181 [2024-11-04 17:21:38.954921] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:20:38.181 [2024-11-04 17:21:38.955145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80078 ] 00:20:38.454 [2024-11-04 17:21:39.096704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.454 [2024-11-04 17:21:39.156883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.454 17:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:38.454 17:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:20:38.454 17:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:38.454 17:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:38.454 17:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:39.026 [2024-11-04 17:21:39.589626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:39.026 17:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.026 17:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:39.285 nvme0n1 00:20:39.285 17:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:39.285 17:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:39.543 Running I/O for 2 seconds... 
00:20:41.416 15114.00 IOPS, 59.04 MiB/s [2024-11-04T17:21:42.220Z] 14986.50 IOPS, 58.54 MiB/s 00:20:41.416 Latency(us) 00:20:41.416 [2024-11-04T17:21:42.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.416 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:41.416 nvme0n1 : 2.01 14977.57 58.51 0.00 0.00 8539.37 6464.23 17158.52 00:20:41.416 [2024-11-04T17:21:42.220Z] =================================================================================================================== 00:20:41.416 [2024-11-04T17:21:42.220Z] Total : 14977.57 58.51 0.00 0.00 8539.37 6464.23 17158.52 00:20:41.416 { 00:20:41.416 "results": [ 00:20:41.416 { 00:20:41.416 "job": "nvme0n1", 00:20:41.416 "core_mask": "0x2", 00:20:41.416 "workload": "randwrite", 00:20:41.416 "status": "finished", 00:20:41.416 "queue_depth": 128, 00:20:41.416 "io_size": 4096, 00:20:41.416 "runtime": 2.009738, 00:20:41.416 "iops": 14977.574191262742, 00:20:41.416 "mibps": 58.506149184620085, 00:20:41.416 "io_failed": 0, 00:20:41.416 "io_timeout": 0, 00:20:41.416 "avg_latency_us": 8539.371320433329, 00:20:41.416 "min_latency_us": 6464.232727272727, 00:20:41.416 "max_latency_us": 17158.516363636365 00:20:41.416 } 00:20:41.416 ], 00:20:41.416 "core_count": 1 00:20:41.416 } 00:20:41.416 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:41.416 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:41.416 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:41.416 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:41.416 | select(.opcode=="crc32c") 00:20:41.416 | "\(.module_name) \(.executed)"' 00:20:41.416 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80078 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80078 ']' 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80078 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80078 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80078' 00:20:41.983 killing process with pid 80078 00:20:41.983 Received shutdown signal, test time was about 2.000000 seconds 00:20:41.983 00:20:41.983 Latency(us) 00:20:41.983 [2024-11-04T17:21:42.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.983 [2024-11-04T17:21:42.787Z] =================================================================================================================== 00:20:41.983 [2024-11-04T17:21:42.787Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80078 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80078 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:41.983 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80132 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80132 /var/tmp/bperf.sock 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 80132 ']' 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:41.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:41.984 17:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:42.242 [2024-11-04 17:21:42.820666] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:20:42.242 [2024-11-04 17:21:42.820959] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80132 ] 00:20:42.242 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:42.242 Zero copy mechanism will not be used. 00:20:42.242 [2024-11-04 17:21:42.966882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.242 [2024-11-04 17:21:43.023256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.179 17:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:43.179 17:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:20:43.179 17:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:43.179 17:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:43.179 17:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:43.438 [2024-11-04 17:21:44.104948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:43.438 17:21:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:43.438 17:21:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:44.015 nvme0n1 00:20:44.015 17:21:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:44.015 17:21:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:44.015 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:44.015 Zero copy mechanism will not be used. 00:20:44.015 Running I/O for 2 seconds... 
00:20:45.913 6261.00 IOPS, 782.62 MiB/s [2024-11-04T17:21:46.717Z] 6320.00 IOPS, 790.00 MiB/s 00:20:45.913 Latency(us) 00:20:45.913 [2024-11-04T17:21:46.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.913 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:45.913 nvme0n1 : 2.00 6316.34 789.54 0.00 0.00 2526.84 1690.53 6702.55 00:20:45.913 [2024-11-04T17:21:46.717Z] =================================================================================================================== 00:20:45.913 [2024-11-04T17:21:46.717Z] Total : 6316.34 789.54 0.00 0.00 2526.84 1690.53 6702.55 00:20:45.913 { 00:20:45.913 "results": [ 00:20:45.913 { 00:20:45.913 "job": "nvme0n1", 00:20:45.913 "core_mask": "0x2", 00:20:45.913 "workload": "randwrite", 00:20:45.913 "status": "finished", 00:20:45.913 "queue_depth": 16, 00:20:45.913 "io_size": 131072, 00:20:45.913 "runtime": 2.004324, 00:20:45.913 "iops": 6316.344064133344, 00:20:45.913 "mibps": 789.543008016668, 00:20:45.913 "io_failed": 0, 00:20:45.913 "io_timeout": 0, 00:20:45.913 "avg_latency_us": 2526.838165158696, 00:20:45.913 "min_latency_us": 1690.530909090909, 00:20:45.913 "max_latency_us": 6702.545454545455 00:20:45.913 } 00:20:45.913 ], 00:20:45.913 "core_count": 1 00:20:45.913 } 00:20:45.913 17:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:45.913 17:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:46.171 17:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:46.171 17:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:46.171 | select(.opcode=="crc32c") 00:20:46.171 | "\(.module_name) \(.executed)"' 00:20:46.171 17:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80132 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 80132 ']' 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 80132 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80132 00:20:46.431 killing process with pid 80132 00:20:46.431 Received shutdown signal, test time was about 2.000000 seconds 00:20:46.431 00:20:46.431 Latency(us) 00:20:46.431 [2024-11-04T17:21:47.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:46.431 [2024-11-04T17:21:47.235Z] =================================================================================================================== 00:20:46.431 [2024-11-04T17:21:47.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80132' 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 80132 00:20:46.431 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 80132 00:20:46.690 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79943 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79943 ']' 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79943 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79943 00:20:46.691 killing process with pid 79943 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79943' 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79943 00:20:46.691 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79943 00:20:46.950 00:20:46.950 real 0m17.371s 00:20:46.950 user 0m33.934s 00:20:46.950 sys 0m4.601s 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:46.950 ************************************ 00:20:46.950 END TEST nvmf_digest_clean 00:20:46.950 ************************************ 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:46.950 ************************************ 00:20:46.950 START TEST nvmf_digest_error 00:20:46.950 ************************************ 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:20:46.950 17:21:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80221 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80221 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80221 ']' 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:46.950 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:46.950 [2024-11-04 17:21:47.628170] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:20:46.950 [2024-11-04 17:21:47.628274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.209 [2024-11-04 17:21:47.779198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.209 [2024-11-04 17:21:47.827777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.209 [2024-11-04 17:21:47.828054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.209 [2024-11-04 17:21:47.828254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.209 [2024-11-04 17:21:47.828372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.209 [2024-11-04 17:21:47.828410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.209 [2024-11-04 17:21:47.828844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.209 [2024-11-04 17:21:47.941373] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.209 17:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.209 [2024-11-04 17:21:48.003606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:47.468 null0 00:20:47.468 [2024-11-04 17:21:48.056313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.468 [2024-11-04 17:21:48.080485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80244 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80244 /var/tmp/bperf.sock 00:20:47.468 17:21:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80244 ']' 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:47.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:47.468 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.468 [2024-11-04 17:21:48.141729] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:20:47.468 [2024-11-04 17:21:48.141864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80244 ] 00:20:47.728 [2024-11-04 17:21:48.291454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.728 [2024-11-04 17:21:48.337603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.728 [2024-11-04 17:21:48.398975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:47.728 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:47.728 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:20:47.728 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.728 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:47.987 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:47.987 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.987 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:47.987 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.987 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:47.987 17:21:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:48.552 nvme0n1 00:20:48.552 17:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:48.552 17:21:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.552 17:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:48.552 17:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.552 17:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:48.553 17:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:48.553 Running I/O for 2 seconds... 00:20:48.553 [2024-11-04 17:21:49.219140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.553 [2024-11-04 17:21:49.219203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.553 [2024-11-04 17:21:49.219243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.553 [2024-11-04 17:21:49.235703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.553 [2024-11-04 17:21:49.235745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.553 [2024-11-04 17:21:49.235775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.553 [2024-11-04 17:21:49.253845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.553 [2024-11-04 17:21:49.253902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.553 [2024-11-04 17:21:49.253916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.553 [2024-11-04 17:21:49.271757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.553 [2024-11-04 17:21:49.271812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.553 [2024-11-04 17:21:49.271842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.553 [2024-11-04 17:21:49.288530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.553 [2024-11-04 17:21:49.288583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.553 [2024-11-04 17:21:49.288612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.553 [2024-11-04 17:21:49.305047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.553 [2024-11-04 17:21:49.305100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5251 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.553 [2024-11-04 17:21:49.305130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.553 [2024-11-04 17:21:49.321412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.553 [2024-11-04 17:21:49.321467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.553 [2024-11-04 17:21:49.321496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.553 [2024-11-04 17:21:49.338088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.553 [2024-11-04 17:21:49.338130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.553 [2024-11-04 17:21:49.338143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.812 [2024-11-04 17:21:49.355795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.355851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.355864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.372486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.372541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.372571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.389730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.389785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.389815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.407693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.407748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.407776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.424776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.424826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:6139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.424855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.441506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.441559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.441588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.458892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.458946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.458974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.474962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.475014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.475043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.490707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.490759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.490787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.505842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.505918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.505952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.520857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.520908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.520936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.535954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.536006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.536034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.550942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.550992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.551020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.565706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.565758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.565785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.580618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.580668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.580696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.595608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.595674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.595702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:48.813 [2024-11-04 17:21:49.610708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:48.813 [2024-11-04 17:21:49.610759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.813 [2024-11-04 17:21:49.610787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.073 [2024-11-04 17:21:49.626966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.073 [2024-11-04 17:21:49.627016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.073 [2024-11-04 17:21:49.627044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.073 [2024-11-04 17:21:49.641834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.073 
[2024-11-04 17:21:49.641921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.073 [2024-11-04 17:21:49.641950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.073 [2024-11-04 17:21:49.657462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.073 [2024-11-04 17:21:49.657512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.073 [2024-11-04 17:21:49.657540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.073 [2024-11-04 17:21:49.672907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.073 [2024-11-04 17:21:49.672958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.073 [2024-11-04 17:21:49.672986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.073 [2024-11-04 17:21:49.687842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.073 [2024-11-04 17:21:49.687893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.687921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.702849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.702900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.702928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.717780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.717832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.717860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.732563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.732613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.732640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.747443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.747495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.747523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.762733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.762800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.762828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.778027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.778063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.778091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.792813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.792863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.792891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.807806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.807856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.807883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.822669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.822718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.822745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.837650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.837700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.837728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.853448] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.853499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.853527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.074 [2024-11-04 17:21:49.870900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.074 [2024-11-04 17:21:49.870940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.074 [2024-11-04 17:21:49.870953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:49.888561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:49.888613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:49.888641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:49.904980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:49.905033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:49.905061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:49.922195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:49.922293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:49.922323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:49.940645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:49.940685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:49.940698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:49.958694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:49.958745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:49.958774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:49.334 [2024-11-04 17:21:49.975515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:49.975551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:49.975579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:49.992852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:49.992905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:49.992934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:50.009959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:50.009999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:50.010012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:50.027868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:50.027915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:50.027927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:50.045510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:50.045562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:50.045591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:50.062440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:50.062493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:50.062520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:50.080260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:50.080303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:50.080333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:50.097077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:50.097129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:50.097158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:50.113500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:50.113553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:50.113581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.334 [2024-11-04 17:21:50.131104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.334 [2024-11-04 17:21:50.131156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.334 [2024-11-04 17:21:50.131186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 [2024-11-04 17:21:50.148657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.148709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 [2024-11-04 17:21:50.148738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 [2024-11-04 17:21:50.165277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.165313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 [2024-11-04 17:21:50.165341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 [2024-11-04 17:21:50.182723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.182763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 [2024-11-04 17:21:50.182777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 15434.00 IOPS, 60.29 MiB/s [2024-11-04T17:21:50.398Z] [2024-11-04 17:21:50.200494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.200543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 
[2024-11-04 17:21:50.200572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 [2024-11-04 17:21:50.216379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.216431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 [2024-11-04 17:21:50.216459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 [2024-11-04 17:21:50.233179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.233237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 [2024-11-04 17:21:50.233267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 [2024-11-04 17:21:50.257176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.257252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 [2024-11-04 17:21:50.257266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 [2024-11-04 17:21:50.273255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.273307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 [2024-11-04 17:21:50.273336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.594 [2024-11-04 17:21:50.290672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.594 [2024-11-04 17:21:50.290711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.594 [2024-11-04 17:21:50.290723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.595 [2024-11-04 17:21:50.307682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.595 [2024-11-04 17:21:50.307735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.595 [2024-11-04 17:21:50.307764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.595 [2024-11-04 17:21:50.323578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.595 [2024-11-04 17:21:50.323614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16983 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.595 [2024-11-04 17:21:50.323644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.595 [2024-11-04 17:21:50.340267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.595 [2024-11-04 17:21:50.340326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.595 [2024-11-04 17:21:50.340354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.595 [2024-11-04 17:21:50.357192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.595 [2024-11-04 17:21:50.357255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.595 [2024-11-04 17:21:50.357284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.595 [2024-11-04 17:21:50.373274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.595 [2024-11-04 17:21:50.373327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.595 [2024-11-04 17:21:50.373354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.595 [2024-11-04 17:21:50.389454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.595 [2024-11-04 17:21:50.389509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.595 [2024-11-04 17:21:50.389523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.407536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.407573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.407585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.425544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.425592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.425621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.444078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.444133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.444147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.462892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.462932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.462945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.481268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.481322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.481352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.499077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.499132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.499161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.516892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.516931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.516944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.534270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.534324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.534337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.551192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.551254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.551285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.568481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 
00:20:49.854 [2024-11-04 17:21:50.568518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.568547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.586023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.586062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.586075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.603854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.603892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.603921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.620770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.620833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.620862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.636565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.636618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.636647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.854 [2024-11-04 17:21:50.652015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:49.854 [2024-11-04 17:21:50.652085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.854 [2024-11-04 17:21:50.652097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.114 [2024-11-04 17:21:50.668495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.114 [2024-11-04 17:21:50.668547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.114 [2024-11-04 17:21:50.668575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.114 [2024-11-04 17:21:50.683911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xd9c370) 00:20:50.114 [2024-11-04 17:21:50.683963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.114 [2024-11-04 17:21:50.683990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.114 [2024-11-04 17:21:50.699615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.114 [2024-11-04 17:21:50.699666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.114 [2024-11-04 17:21:50.699693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.114 [2024-11-04 17:21:50.715458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.114 [2024-11-04 17:21:50.715508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.114 [2024-11-04 17:21:50.715536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.114 [2024-11-04 17:21:50.730811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.114 [2024-11-04 17:21:50.730863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.114 [2024-11-04 17:21:50.730891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.114 [2024-11-04 17:21:50.746003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.114 [2024-11-04 17:21:50.746039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.114 [2024-11-04 17:21:50.746067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.114 [2024-11-04 17:21:50.761231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.114 [2024-11-04 17:21:50.761280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.761309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.115 [2024-11-04 17:21:50.776662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.776714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.776741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.115 [2024-11-04 17:21:50.792315] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.792365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.792393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.115 [2024-11-04 17:21:50.807901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.807952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.807979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.115 [2024-11-04 17:21:50.823279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.823317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.823345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.115 [2024-11-04 17:21:50.839070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.839130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.839159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.115 [2024-11-04 17:21:50.854837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.854889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.854901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.115 [2024-11-04 17:21:50.871188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.871247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.871275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.115 [2024-11-04 17:21:50.886568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.886617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.886644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:50.115 [2024-11-04 17:21:50.901836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.115 [2024-11-04 17:21:50.901920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.115 [2024-11-04 17:21:50.901949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:50.917852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:50.917940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:50.917953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:50.933521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:50.933572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:50.933599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:50.949751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:50.949787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:50.949815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:50.967774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:50.967812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:50.967826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:50.984873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:50.984926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:50.984955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.001017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.001068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:51.001096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.016721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.016772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:51.016800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.032118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.032169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:51.032196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.047494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.047529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:51.047557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.062805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.062856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:51.062884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.079168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.079242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:51.079255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.094748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.094797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:51.094825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.110063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.110099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.374 [2024-11-04 17:21:51.110127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.374 [2024-11-04 17:21:51.126271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.374 [2024-11-04 17:21:51.126331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.375 [2024-11-04 17:21:51.126375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.375 [2024-11-04 17:21:51.141498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.375 [2024-11-04 17:21:51.141533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.375 [2024-11-04 17:21:51.141561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.375 [2024-11-04 17:21:51.156658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.375 [2024-11-04 17:21:51.156709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.375 [2024-11-04 17:21:51.156737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.375 [2024-11-04 17:21:51.171894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.375 [2024-11-04 17:21:51.171944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.375 [2024-11-04 17:21:51.171971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.634 [2024-11-04 17:21:51.188415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.634 [2024-11-04 17:21:51.188467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.634 [2024-11-04 17:21:51.188495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.634 15433.50 IOPS, 60.29 MiB/s [2024-11-04T17:21:51.438Z] [2024-11-04 17:21:51.206465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd9c370) 00:20:50.634 [2024-11-04 17:21:51.206503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:50.634 [2024-11-04 17:21:51.206531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:50.634 00:20:50.634 Latency(us) 00:20:50.634 [2024-11-04T17:21:51.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.634 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:50.634 nvme0n1 : 2.01 15474.22 60.45 0.00 0.00 8264.70 7179.17 31218.97 00:20:50.634 
[2024-11-04T17:21:51.438Z] =================================================================================================================== 00:20:50.634 [2024-11-04T17:21:51.438Z] Total : 15474.22 60.45 0.00 0.00 8264.70 7179.17 31218.97 00:20:50.634 { 00:20:50.634 "results": [ 00:20:50.634 { 00:20:50.634 "job": "nvme0n1", 00:20:50.634 "core_mask": "0x2", 00:20:50.634 "workload": "randread", 00:20:50.634 "status": "finished", 00:20:50.634 "queue_depth": 128, 00:20:50.634 "io_size": 4096, 00:20:50.634 "runtime": 2.011216, 00:20:50.634 "iops": 15474.22057103762, 00:20:50.634 "mibps": 60.446174105615704, 00:20:50.634 "io_failed": 0, 00:20:50.634 "io_timeout": 0, 00:20:50.634 "avg_latency_us": 8264.70004568531, 00:20:50.634 "min_latency_us": 7179.170909090909, 00:20:50.634 "max_latency_us": 31218.967272727274 00:20:50.634 } 00:20:50.634 ], 00:20:50.634 "core_count": 1 00:20:50.634 } 00:20:50.634 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:50.634 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:50.634 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:50.634 | .driver_specific 00:20:50.634 | .nvme_error 00:20:50.634 | .status_code 00:20:50.634 | .command_transient_transport_error' 00:20:50.634 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 122 > 0 )) 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80244 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80244 ']' 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80244 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80244 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:50.937 killing process with pid 80244 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80244' 00:20:50.937 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80244 00:20:50.937 Received shutdown signal, test time was about 2.000000 seconds 00:20:50.937 00:20:50.937 Latency(us) 00:20:50.937 [2024-11-04T17:21:51.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.937 [2024-11-04T17:21:51.741Z] =================================================================================================================== 00:20:50.937 [2024-11-04T17:21:51.741Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.937 17:21:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80244 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80297 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80297 /var/tmp/bperf.sock 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80297 ']' 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:51.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:51.196 17:21:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.196 [2024-11-04 17:21:51.768298] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:20:51.196 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:51.196 Zero copy mechanism will not be used. 
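The digest.sh trace above reads the transient-transport-error count back out of bdevperf over its RPC socket before the next error case starts. A minimal stand-alone sketch of that read-back, using only the RPC call and jq filter printed in the trace (rpc.py path, /var/tmp/bperf.sock socket and nvme0n1 bdev name as shown; everything else here is illustrative), might look like:

#!/usr/bin/env bash
# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
# Assumes bdevperf was started with -r /var/tmp/bperf.sock and that
# bdev_nvme_set_options --nvme-error-stat was applied, as in the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

# Mirrors the digest.sh@71 check: the run passes only if at least one injected
# digest error surfaced as a transient transport error (122 in the run above).
(( errcount > 0 )) || exit 1
echo "transient transport errors: $errcount"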
00:20:51.196 [2024-11-04 17:21:51.768393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80297 ] 00:20:51.196 [2024-11-04 17:21:51.912066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.196 [2024-11-04 17:21:51.974119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.455 [2024-11-04 17:21:52.035616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:51.455 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:51.455 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:20:51.455 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.455 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.719 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:51.719 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.719 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.719 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.719 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.719 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.978 nvme0n1 00:20:51.978 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:51.978 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.978 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.978 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.978 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:51.978 17:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:52.237 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:52.237 Zero copy mechanism will not be used. 00:20:52.237 Running I/O for 2 seconds... 
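The shell trace above shows how this error case is wired up: NVMe error counters and unlimited bdev retries are enabled on the bdevperf side, crc32c error injection is cleared while the controller is attached with TCP data digest (--ddgst), and only then is crc32c switched to corrupt so digest verification fails and the host logs the data digest errors that follow. A hedged sketch of those steps, with every flag copied verbatim from the trace (target 10.0.0.3:4420, NQN nqn.2016-06.io.spdk:cnode1; the exact semantics of -i 32 are not asserted here):

#!/usr/bin/env bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock   # bdevperf RPC socket, as started above

# bdevperf side: keep per-status-code NVMe error counts, retry transient failures forever.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Sent to the default SPDK RPC socket (rpc_cmd in the trace, no -s argument):
# make sure no crc32c corruption is active while the controller attaches.
"$RPC" accel_error_inject_error -o crc32c -t disable

# Attach the NVMe/TCP controller with data digest enabled, so every data PDU
# carries a CRC32C that must be verified on receive.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Now corrupt crc32c results (-i 32 as in the trace) and kick off the I/O job;
# the data digest errors logged below are the expected result.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests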
00:20:52.238 [2024-11-04 17:21:52.839921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.839984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.840001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.844223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.844265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.844279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.848586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.848645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.848660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.853120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.853161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.853191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.857604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.857662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.857694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.862062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.862105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.862120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.866403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.866441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.866471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.870777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.870835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.870849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.875237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.875295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.875309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.879490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.879529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.879559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.883817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.883857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.883887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.888163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.888236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.888251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.892519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.892558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.892587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.896883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.896923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.896952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.901318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.901356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.901386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.905602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.905658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.905687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.909861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.238 [2024-11-04 17:21:52.909926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.238 [2024-11-04 17:21:52.909941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.238 [2024-11-04 17:21:52.914166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.239 [2024-11-04 17:21:52.914222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.239 [2024-11-04 17:21:52.914237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.239 [2024-11-04 17:21:52.918507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.239 [2024-11-04 17:21:52.918547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.239 [2024-11-04 17:21:52.918577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.239 [2024-11-04 17:21:52.922942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.239 [2024-11-04 17:21:52.922984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.239 [2024-11-04 17:21:52.923014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.239 [2024-11-04 17:21:52.927284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.239 [2024-11-04 17:21:52.927324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.239 [2024-11-04 17:21:52.927353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:52.239 [2024-11-04 17:21:52.931791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400)
00:20:52.239 [2024-11-04 17:21:52.931833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.239 [2024-11-04 17:21:52.931864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record sequence repeats from 17:21:52.936 through 17:21:53.531 for READ commands at varying LBAs (sqid:1 cid:15 nsid:1, len:32): nvme_tcp.c:1365 reports a data digest error on tqpair=(0xe00400) and each command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061 ...]
00:20:52.766 [2024-11-04 17:21:53.536095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400)
00:20:52.766 [2024-11-04 17:21:53.536149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.766 [2024-11-04 17:21:53.536179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:20:52.766 [2024-11-04 17:21:53.540616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400)
00:20:52.766 [2024-11-04 17:21:53.540687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.766 [2024-11-04 17:21:53.540717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.766 [2024-11-04 17:21:53.545170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.766 [2024-11-04 17:21:53.545252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.766 [2024-11-04 17:21:53.545283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.766 [2024-11-04 17:21:53.549581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.766 [2024-11-04 17:21:53.549645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.766 [2024-11-04 17:21:53.549674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.766 [2024-11-04 17:21:53.554325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.766 [2024-11-04 17:21:53.554377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.766 [2024-11-04 17:21:53.554407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.766 [2024-11-04 17:21:53.558658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.766 [2024-11-04 17:21:53.558710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.766 [2024-11-04 17:21:53.558738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.766 [2024-11-04 17:21:53.563311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:52.766 [2024-11-04 17:21:53.563364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.766 [2024-11-04 17:21:53.563379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.026 [2024-11-04 17:21:53.567723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.026 [2024-11-04 17:21:53.567778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.026 [2024-11-04 17:21:53.567807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.026 [2024-11-04 17:21:53.572182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.026 [2024-11-04 17:21:53.572267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.026 [2024-11-04 17:21:53.572281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.026 [2024-11-04 17:21:53.576508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.026 [2024-11-04 17:21:53.576563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.576608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.581354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.581392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.581421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.586055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.586096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.586110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.590697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.590755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.590785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.595314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.595396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.595425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.599926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.599984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.599998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.604590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 
00:20:53.027 [2024-11-04 17:21:53.604660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.604689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.609120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.609175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.609205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.613473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.613526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.613555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.617682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.617737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.617766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.621876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.621957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.621972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.626000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.626038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.626068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.630349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.630416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.630445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.634468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.634521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.634551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.638645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.638698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.638727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.642906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.642963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.642994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.647272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.647337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.647366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.651564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.651618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.651663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.656056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.656097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.656111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.660320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.660374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.660403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.664680] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.664735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.664764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.669285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.669340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.669353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.673619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.673674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.673689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.677976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.678016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.678030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.682548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.682635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.682666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.687318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.687383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.687412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.027 [2024-11-04 17:21:53.691681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.691736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.027 [2024-11-04 17:21:53.691766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:20:53.027 [2024-11-04 17:21:53.695899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.027 [2024-11-04 17:21:53.695970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.696000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.700385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.700453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.700466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.704675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.704730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.704759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.708982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.709064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.709093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.713279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.713331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.713359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.717479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.717545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.717574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.721620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.721673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.721702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.725957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.726005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.726034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.730067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.730105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.730135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.734229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.734308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.734338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.738929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.738969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.738983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.743503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.743558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.743587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.747963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.748023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.748054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.752473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.752512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.752541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.756984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.757084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.757114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.761515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.761569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.761598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.765943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.765983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.765998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.770329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.770395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.770424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.774734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.774788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.774818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.779127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.779182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.779212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.783485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.783539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.028 [2024-11-04 17:21:53.783568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.787837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.787890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.787919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.792443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.792496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.792525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.797026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.797097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.797126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.801370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.801422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.801450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.805643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.805716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.805747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.809810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.809865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.809928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.028 [2024-11-04 17:21:53.813876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.028 [2024-11-04 17:21:53.813940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.028 [2024-11-04 17:21:53.813970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.029 [2024-11-04 17:21:53.818015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.029 [2024-11-04 17:21:53.818054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.029 [2024-11-04 17:21:53.818067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.029 [2024-11-04 17:21:53.821983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.029 [2024-11-04 17:21:53.822038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.029 [2024-11-04 17:21:53.822067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.029 [2024-11-04 17:21:53.826454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.029 [2024-11-04 17:21:53.826508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.029 [2024-11-04 17:21:53.826521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.830773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.830825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.830853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.289 6975.00 IOPS, 871.88 MiB/s [2024-11-04T17:21:54.093Z] [2024-11-04 17:21:53.836600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.836664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.836693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.840978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.841031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.841059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.845428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 
[2024-11-04 17:21:53.845483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.845496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.850012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.850053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.850066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.854498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.854536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.854566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.858828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.858885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.858899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.863200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.863266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.863296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.867579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.867632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.867661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.871694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.871747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.871775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.875820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.875873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.875901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.879971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.289 [2024-11-04 17:21:53.880025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.289 [2024-11-04 17:21:53.880053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.289 [2024-11-04 17:21:53.884126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.884182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.884210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.888311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.888363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.888393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.892882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.892939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.892953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.897403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.897456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.897485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.901845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.901927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.901942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.906208] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.906269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.906299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.910527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.910564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.910594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.914850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.914905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.914934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.919043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.919097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.919125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.923306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.923360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.923388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.927455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.927509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.927537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.931570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.931623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.931652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.936090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.936145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.936174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.940577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.940634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.940649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.945205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.945270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.945300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.949844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.949923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.949952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.954509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.954563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.954592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.959192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.959288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.959319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.963713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.963767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.963797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.967954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.968008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.968037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.972241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.972295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.972324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.976546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.976584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.976598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.980732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.980785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.980815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.985157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.985202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.985243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.989693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.989733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.989748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.994330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.994368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.994397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.290 [2024-11-04 17:21:53.998822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.290 [2024-11-04 17:21:53.998877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.290 [2024-11-04 17:21:53.998891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.003247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.003313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.003344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.007675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.007731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.007761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.012107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.012162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.012191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.016492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.016546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.016575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.021058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.021114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.021143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.025542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.025615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.291 [2024-11-04 17:21:54.025629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.030086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.030127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.030141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.034572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.034643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.034657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.038992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.039034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.039048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.043411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.043477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.043507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.047995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.048095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.048124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.052583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.052653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.052667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.057149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.057234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.057249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.061699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.061738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.061752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.066063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.066104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.066118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.070393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.070430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.070459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.074572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.074641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.074670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.079132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.079186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.079215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.083390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.083443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.083472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.291 [2024-11-04 17:21:54.087849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.291 [2024-11-04 17:21:54.087905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.291 [2024-11-04 17:21:54.087919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.092213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.092281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.092311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.096742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.096783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.096797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.101369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.101424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.101437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.105875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.105925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.105939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.110393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.110445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.110474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.114894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.114952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.114965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.119439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 
00:20:53.552 [2024-11-04 17:21:54.119477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.119506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.124011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.124064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.124093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.128203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.128269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.128298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.132393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.132445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.132473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.136829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.136883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.136912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.141270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.141333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.141363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.145738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.145794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.145808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.150333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.150389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.150403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.154891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.154948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.154962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.159377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.159413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.159442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.163815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.163872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.163885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.168424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.168462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.168491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.172851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.172905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.172935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.177348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.177400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.177429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.181739] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.181791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.181820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.186041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.186083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.186097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.190780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.190835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.190866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.195469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.195522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.195551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.200074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.200127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.200155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.552 [2024-11-04 17:21:54.204537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.552 [2024-11-04 17:21:54.204579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.552 [2024-11-04 17:21:54.204593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.209363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.209431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.209444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:53.553 [2024-11-04 17:21:54.213971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.214013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.214027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.218308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.218374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.218403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.222697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.222750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.222778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.227063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.227115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.227143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.231430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.231483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.231511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.235806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.235860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.235890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.240070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.240122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.240150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.244276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.244309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.244338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.248369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.248405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.248433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.252373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.252410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.252438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.256839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.256909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.256938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.261381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.261433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.261461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.265573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.265623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.265668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.269859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.269936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.269966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.274234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.274295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.274308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.278441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.278491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.278519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.282661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.282714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.282742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.287045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.287096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.287124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.291199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.291259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.291288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.295368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.295417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.295446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.299452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.299502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.553 [2024-11-04 17:21:54.299530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.303727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.303779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.303807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.307989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.308042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.308070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.312640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.312713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.312726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.317184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.317249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.317278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.321642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.553 [2024-11-04 17:21:54.321719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.553 [2024-11-04 17:21:54.321748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.553 [2024-11-04 17:21:54.326387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.554 [2024-11-04 17:21:54.326440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.554 [2024-11-04 17:21:54.326468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.554 [2024-11-04 17:21:54.330895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.554 [2024-11-04 17:21:54.330970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.554 [2024-11-04 17:21:54.331015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.554 [2024-11-04 17:21:54.335374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.554 [2024-11-04 17:21:54.335427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.554 [2024-11-04 17:21:54.335456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.554 [2024-11-04 17:21:54.339802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.554 [2024-11-04 17:21:54.339857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.554 [2024-11-04 17:21:54.339886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.554 [2024-11-04 17:21:54.344154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.554 [2024-11-04 17:21:54.344235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.554 [2024-11-04 17:21:54.344250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.554 [2024-11-04 17:21:54.348408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.554 [2024-11-04 17:21:54.348475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.554 [2024-11-04 17:21:54.348503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.353142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.353198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.353249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.357709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.357793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.357822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.362108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.362148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.362162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.366961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.367031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.367064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.371642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.371698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.371712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.376340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.376392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.376420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.380808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.380864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.380894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.385447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.385499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.385527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.389940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.389980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.389995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.394297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 
00:20:53.814 [2024-11-04 17:21:54.394355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.394382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.398325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.398361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.398389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.402454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.402489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.402516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.406808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.406861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.406889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.411592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.814 [2024-11-04 17:21:54.411645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.814 [2024-11-04 17:21:54.411658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.814 [2024-11-04 17:21:54.415959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.416029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.416058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.420810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.420879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.420895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.425445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.425496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.425524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.429929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.429970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.429985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.434572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.434640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.434669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.438890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.438960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.438991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.443258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.443355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.443384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.447658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.447712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.447742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.452077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.452133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.452162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.456547] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.456601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.456629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.460901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.460987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.461018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.465292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.465390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.465418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.469811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.469867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.469937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.474573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.474641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.474671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.479014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.479081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.479111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.483436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.483489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.483518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:53.815 [2024-11-04 17:21:54.487797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.487853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.487884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.492222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.492286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.492316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.496596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.496666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.496696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.501077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.501132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.501161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.505701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.505755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.505785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.510018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.510055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.510068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.514643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.514714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.514744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.519330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.519404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.519419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.523983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.524037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.524081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.528865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.528936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.528951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.533713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.533768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.815 [2024-11-04 17:21:54.533797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.815 [2024-11-04 17:21:54.538500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.815 [2024-11-04 17:21:54.538541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.538569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.542847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.542903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.542932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.547153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.547236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.547250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.551274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.551325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.551353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.555352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.555421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.555450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.559601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.559653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.559681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.563879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.563932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.563962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.568098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.568152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.568181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.572564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.572617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.572661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.577347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.577400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:53.816 [2024-11-04 17:21:54.577428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.581828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.581891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.581906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.586408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.586445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.586474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.590972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.591055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.591085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.595435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.595470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.595499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.599866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.599903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.599931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.604191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.604257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.604271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.608279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.608314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.608343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.816 [2024-11-04 17:21:54.612446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:53.816 [2024-11-04 17:21:54.612482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.816 [2024-11-04 17:21:54.612511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.076 [2024-11-04 17:21:54.616733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.076 [2024-11-04 17:21:54.616771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.076 [2024-11-04 17:21:54.616800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.076 [2024-11-04 17:21:54.621239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.076 [2024-11-04 17:21:54.621289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.076 [2024-11-04 17:21:54.621321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.076 [2024-11-04 17:21:54.625772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.625994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.626014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.630517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.630557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.630600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.635055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.635092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.635121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.639361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.639397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.639426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.643682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.643722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.643751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.647954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.648022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.648050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.652174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.652239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.652253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.656253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.656300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.656329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.660332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.660369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.660398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.664605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.664641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.664669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.669221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 
00:20:54.077 [2024-11-04 17:21:54.669275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.669291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.673793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.673834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.673864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.678194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.678247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.678262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.682425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.682461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.682490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.686779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.686817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.686846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.691130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.691169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.691198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.695512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.695548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.695577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.700007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.700075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.700104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.704526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.704564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.704593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.709122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.709161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.709190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.713576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.713615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.713644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.718336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.718372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.718401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.722912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.722985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.723031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.727537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.727574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.727602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.732113] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.732152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.732181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.736478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.736515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.736543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.077 [2024-11-04 17:21:54.740774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.077 [2024-11-04 17:21:54.740813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.077 [2024-11-04 17:21:54.740842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.745013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.745050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.745078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.749194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.749258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.749272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.753244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.753280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.753309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.757183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.757445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.757463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:54.078 [2024-11-04 17:21:54.761776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.761817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.761846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.765881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.765962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.765976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.770091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.770133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.770147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.774375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.774410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.774438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.778569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.778637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.778666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.783171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.783241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.783257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.787544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.787583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.787613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.791907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.791949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.791963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.796451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.796492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.796506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.800918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.800974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.801019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.805545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.805585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.805615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.810019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.810060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.810074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.814681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.814719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.814749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.819438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.819475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.819504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.824048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.824087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.824117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:54.078 [2024-11-04 17:21:54.828540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.828577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.828607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:54.078 6983.00 IOPS, 872.88 MiB/s [2024-11-04T17:21:54.882Z] [2024-11-04 17:21:54.834500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe00400) 00:20:54.078 [2024-11-04 17:21:54.834540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.078 [2024-11-04 17:21:54.834569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:54.078 00:20:54.078 Latency(us) 00:20:54.078 [2024-11-04T17:21:54.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.078 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:54.078 nvme0n1 : 2.00 6979.59 872.45 0.00 0.00 2288.63 1765.00 6106.76 00:20:54.078 [2024-11-04T17:21:54.882Z] =================================================================================================================== 00:20:54.078 [2024-11-04T17:21:54.882Z] Total : 6979.59 872.45 0.00 0.00 2288.63 1765.00 6106.76 00:20:54.078 { 00:20:54.078 "results": [ 00:20:54.078 { 00:20:54.078 "job": "nvme0n1", 00:20:54.078 "core_mask": "0x2", 00:20:54.078 "workload": "randread", 00:20:54.078 "status": "finished", 00:20:54.078 "queue_depth": 16, 00:20:54.078 "io_size": 131072, 00:20:54.078 "runtime": 2.003269, 00:20:54.078 "iops": 6979.591857109554, 00:20:54.078 "mibps": 872.4489821386943, 00:20:54.078 "io_failed": 0, 00:20:54.078 "io_timeout": 0, 00:20:54.078 "avg_latency_us": 2288.6326097189894, 00:20:54.078 "min_latency_us": 1765.0036363636364, 00:20:54.078 "max_latency_us": 6106.763636363637 00:20:54.078 } 00:20:54.078 ], 00:20:54.078 "core_count": 1 00:20:54.078 } 00:20:54.078 17:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:54.078 17:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:54.078 17:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:54.078 17:21:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:54.078 | .driver_specific 
00:20:54.078 | .nvme_error 00:20:54.078 | .status_code 00:20:54.078 | .command_transient_transport_error' 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 451 > 0 )) 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80297 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80297 ']' 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80297 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80297 00:20:54.646 killing process with pid 80297 00:20:54.646 Received shutdown signal, test time was about 2.000000 seconds 00:20:54.646 00:20:54.646 Latency(us) 00:20:54.646 [2024-11-04T17:21:55.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.646 [2024-11-04T17:21:55.450Z] =================================================================================================================== 00:20:54.646 [2024-11-04T17:21:55.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80297' 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80297 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80297 00:20:54.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
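For reference, the get_transient_errcount step traced above reduces to a single bdev_get_iostat RPC filtered with jq; a minimal sketch using the socket path, bdev name and jq filter from this run (the nvme_error counters are only populated because the bdevperf instance was configured with --nvme-error-stat):

    # Sketch of host/digest.sh get_transient_errcount, as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Ask the bdevperf instance for per-bdev I/O statistics and pull out the
    # "command transient transport error" status counter kept by bdev_nvme.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The digest-error case passes only if at least one such error was counted
    # (451 in the randread run above).
    (( errcount > 0 ))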
00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80345 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80345 /var/tmp/bperf.sock 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80345 ']' 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:54.646 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:54.646 [2024-11-04 17:21:55.442076] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:20:54.646 [2024-11-04 17:21:55.442417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80345 ] 00:20:54.906 [2024-11-04 17:21:55.590885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.906 [2024-11-04 17:21:55.662412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.164 [2024-11-04 17:21:55.739528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:55.164 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:55.165 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:20:55.165 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:55.165 17:21:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:55.424 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:55.424 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.424 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:55.424 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.424 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:55.424 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:55.993 nvme0n1 00:20:55.993 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:55.993 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.993 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:55.993 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.993 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:55.993 17:21:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:55.993 Running I/O for 2 seconds... 
00:20:55.993 [2024-11-04 17:21:56.778496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fef90 00:20:55.993 [2024-11-04 17:21:56.781670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:55.993 [2024-11-04 17:21:56.781914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.798159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166feb58 00:20:56.253 [2024-11-04 17:21:56.801161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.801442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.817079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fe2e8 00:20:56.253 [2024-11-04 17:21:56.820232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.820275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.837124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fda78 00:20:56.253 [2024-11-04 17:21:56.839927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.839964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.856960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fd208 00:20:56.253 [2024-11-04 17:21:56.860006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.860058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.877181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fc998 00:20:56.253 [2024-11-04 17:21:56.880322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.880381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.897283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fc128 00:20:56.253 [2024-11-04 17:21:56.900294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.900361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:20:56.253 [2024-11-04 17:21:56.917228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fb8b8 00:20:56.253 [2024-11-04 17:21:56.920168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.920230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.936716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fb048 00:20:56.253 [2024-11-04 17:21:56.939712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.939795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.956075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166fa7d8 00:20:56.253 [2024-11-04 17:21:56.959223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.959445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.976067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f9f68 00:20:56.253 [2024-11-04 17:21:56.978912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.979102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:56.995610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f96f8 00:20:56.253 [2024-11-04 17:21:56.998446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.253 [2024-11-04 17:21:56.998486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:56.253 [2024-11-04 17:21:57.015329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f8e88 00:20:56.254 [2024-11-04 17:21:57.018274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.254 [2024-11-04 17:21:57.018309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:56.254 [2024-11-04 17:21:57.035207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f8618 00:20:56.254 [2024-11-04 17:21:57.038035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.254 [2024-11-04 17:21:57.038076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.055500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f7da8 00:20:56.513 [2024-11-04 17:21:57.058287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.058338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.074703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f7538 00:20:56.513 [2024-11-04 17:21:57.077395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.077456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.093821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f6cc8 00:20:56.513 [2024-11-04 17:21:57.096336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.096393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.111895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f6458 00:20:56.513 [2024-11-04 17:21:57.114127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.114167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.130637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f5be8 00:20:56.513 [2024-11-04 17:21:57.133159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.133273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.150674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f5378 00:20:56.513 [2024-11-04 17:21:57.153287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.153339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.170583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f4b08 00:20:56.513 [2024-11-04 17:21:57.173168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.173231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.189700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f4298 00:20:56.513 [2024-11-04 17:21:57.192110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.192153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.208541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f3a28 00:20:56.513 [2024-11-04 17:21:57.210920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.211149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.227708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f31b8 00:20:56.513 [2024-11-04 17:21:57.230041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.230238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.246659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f2948 00:20:56.513 [2024-11-04 17:21:57.249157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.249390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.265497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f20d8 00:20:56.513 [2024-11-04 17:21:57.268054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.268271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.284284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f1868 00:20:56.513 [2024-11-04 17:21:57.286579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.286763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:56.513 [2024-11-04 17:21:57.302764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f0ff8 00:20:56.513 [2024-11-04 17:21:57.305159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.513 [2024-11-04 17:21:57.305359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.321207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f0788 00:20:56.803 [2024-11-04 17:21:57.323666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.323896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.340933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166eff18 00:20:56.803 [2024-11-04 17:21:57.343416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.343605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.360407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ef6a8 00:20:56.803 [2024-11-04 17:21:57.362562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.362724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.379056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166eee38 00:20:56.803 [2024-11-04 17:21:57.381379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.381437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.398126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ee5c8 00:20:56.803 [2024-11-04 17:21:57.400327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.400373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.417187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166edd58 00:20:56.803 [2024-11-04 17:21:57.419637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.419691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.436994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ed4e8 00:20:56.803 [2024-11-04 17:21:57.439444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.439722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.457506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ecc78 00:20:56.803 [2024-11-04 17:21:57.459569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.459619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.477671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ec408 00:20:56.803 [2024-11-04 17:21:57.479902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.479949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.497167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ebb98 00:20:56.803 [2024-11-04 17:21:57.499344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.499549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.512031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166eb328 00:20:56.803 [2024-11-04 17:21:57.513817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.513851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.526277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166eaab8 00:20:56.803 [2024-11-04 17:21:57.527900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.527933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.539765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ea248 00:20:56.803 [2024-11-04 17:21:57.541457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.541486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.554341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e99d8 00:20:56.803 [2024-11-04 17:21:57.556143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.556192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:56.803 [2024-11-04 17:21:57.570952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e9168 00:20:56.803 [2024-11-04 17:21:57.572800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.803 [2024-11-04 17:21:57.572849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:56.804 [2024-11-04 17:21:57.588332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e88f8 00:20:57.087 [2024-11-04 17:21:57.590060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.087 [2024-11-04 17:21:57.590097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:57.087 [2024-11-04 17:21:57.605055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e8088 00:20:57.088 [2024-11-04 17:21:57.606832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.606865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.621778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e7818 00:20:57.088 [2024-11-04 17:21:57.623414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.623460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.636702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e6fa8 00:20:57.088 [2024-11-04 17:21:57.638485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.638534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.652094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e6738 00:20:57.088 [2024-11-04 17:21:57.653752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.653784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.667621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e5ec8 00:20:57.088 [2024-11-04 17:21:57.669241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.669296] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.683109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e5658 00:20:57.088 [2024-11-04 17:21:57.684771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.684832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.699061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e4de8 00:20:57.088 [2024-11-04 17:21:57.700650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.700695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.715117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e4578 00:20:57.088 [2024-11-04 17:21:57.716705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.716750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.731164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e3d08 00:20:57.088 [2024-11-04 17:21:57.732709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.732755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.746990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e3498 00:20:57.088 [2024-11-04 17:21:57.749644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.749689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:57.088 13664.00 IOPS, 53.38 MiB/s [2024-11-04T17:21:57.892Z] [2024-11-04 17:21:57.763726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e2c28 00:20:57.088 [2024-11-04 17:21:57.765216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.765271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.780070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e23b8 00:20:57.088 [2024-11-04 17:21:57.781589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:57.088 [2024-11-04 17:21:57.781635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.796689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e1b48 00:20:57.088 [2024-11-04 17:21:57.798170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.798203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.814061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e12d8 00:20:57.088 [2024-11-04 17:21:57.815523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.815566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.831344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e0a68 00:20:57.088 [2024-11-04 17:21:57.832921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.832968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.849105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e01f8 00:20:57.088 [2024-11-04 17:21:57.850618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.850682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.865943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166df988 00:20:57.088 [2024-11-04 17:21:57.867342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.867377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:57.088 [2024-11-04 17:21:57.881843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166df118 00:20:57.088 [2024-11-04 17:21:57.883303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.088 [2024-11-04 17:21:57.883391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:57.898052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166de8a8 00:20:57.348 [2024-11-04 17:21:57.899481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3366 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:57.899527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:57.913581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166de038 00:20:57.348 [2024-11-04 17:21:57.914988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:57.915050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:57.935713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166de038 00:20:57.348 [2024-11-04 17:21:57.938164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:57.938213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:57.951064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166de8a8 00:20:57.348 [2024-11-04 17:21:57.953612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:57.953657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:57.967214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166df118 00:20:57.348 [2024-11-04 17:21:57.969721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:57.969766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:57.982593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166df988 00:20:57.348 [2024-11-04 17:21:57.984987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:57.985047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:57.998053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e01f8 00:20:57.348 [2024-11-04 17:21:58.000555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.000599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.013764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e0a68 00:20:57.348 [2024-11-04 17:21:58.016237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:24144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.016291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.029695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e12d8 00:20:57.348 [2024-11-04 17:21:58.032245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.032300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.044787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e1b48 00:20:57.348 [2024-11-04 17:21:58.047193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.047245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.059658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e23b8 00:20:57.348 [2024-11-04 17:21:58.062050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.062084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.074678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e2c28 00:20:57.348 [2024-11-04 17:21:58.076899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.076944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.089698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e3498 00:20:57.348 [2024-11-04 17:21:58.091993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.092036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.103522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e3d08 00:20:57.348 [2024-11-04 17:21:58.105605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.105649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.117825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e4578 00:20:57.348 [2024-11-04 17:21:58.120111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:76 nsid:1 lba:8946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.120154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.133865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e4de8 00:20:57.348 [2024-11-04 17:21:58.136376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.348 [2024-11-04 17:21:58.136427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:57.348 [2024-11-04 17:21:58.149685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e5658 00:20:57.608 [2024-11-04 17:21:58.152063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.152108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.164686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e5ec8 00:20:57.608 [2024-11-04 17:21:58.166759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.166803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.178375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e6738 00:20:57.608 [2024-11-04 17:21:58.180329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.180351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.192023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e6fa8 00:20:57.608 [2024-11-04 17:21:58.194005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.194035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.205408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e7818 00:20:57.608 [2024-11-04 17:21:58.207350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.207394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.218779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e8088 00:20:57.608 [2024-11-04 17:21:58.220713] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.220754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.233871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e88f8 00:20:57.608 [2024-11-04 17:21:58.235875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.235917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.247289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e9168 00:20:57.608 [2024-11-04 17:21:58.249130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.249173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.260957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166e99d8 00:20:57.608 [2024-11-04 17:21:58.263189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.263255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.275131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ea248 00:20:57.608 [2024-11-04 17:21:58.276966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.277011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.289855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166eaab8 00:20:57.608 [2024-11-04 17:21:58.291862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.291906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.303532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166eb328 00:20:57.608 [2024-11-04 17:21:58.305320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.305363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.316991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ebb98 00:20:57.608 [2024-11-04 17:21:58.318931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.318974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.330838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ec408 00:20:57.608 [2024-11-04 17:21:58.332712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.332754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.344611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ecc78 00:20:57.608 [2024-11-04 17:21:58.346463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.346507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.358153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ed4e8 00:20:57.608 [2024-11-04 17:21:58.359995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.360040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.372134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166edd58 00:20:57.608 [2024-11-04 17:21:58.373850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.373918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.385347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ee5c8 00:20:57.608 [2024-11-04 17:21:58.387046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.387089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:57.608 [2024-11-04 17:21:58.398783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166eee38 00:20:57.608 [2024-11-04 17:21:58.400510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.608 [2024-11-04 17:21:58.400553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:57.868 [2024-11-04 17:21:58.413027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166ef6a8 00:20:57.868 [2024-11-04 17:21:58.415049] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.868 [2024-11-04 17:21:58.415092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:57.868 [2024-11-04 17:21:58.427329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166eff18 00:20:57.868 [2024-11-04 17:21:58.429156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.868 [2024-11-04 17:21:58.429202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.442553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f0788 00:20:57.869 [2024-11-04 17:21:58.444405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.444465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.458436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f0ff8 00:20:57.869 [2024-11-04 17:21:58.460257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.460312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.473943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f1868 00:20:57.869 [2024-11-04 17:21:58.475768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.475811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.489742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f20d8 00:20:57.869 [2024-11-04 17:21:58.491650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.491679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.505528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f2948 00:20:57.869 [2024-11-04 17:21:58.507262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.507320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.520961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f31b8 00:20:57.869 [2024-11-04 
17:21:58.522824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.522867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.536371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f3a28 00:20:57.869 [2024-11-04 17:21:58.538131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.538177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.552804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f4298 00:20:57.869 [2024-11-04 17:21:58.554621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.554664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.568042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f4b08 00:20:57.869 [2024-11-04 17:21:58.569862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.569939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.584081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f5378 00:20:57.869 [2024-11-04 17:21:58.585949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.585981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.600920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f5be8 00:20:57.869 [2024-11-04 17:21:58.602673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.602705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.617327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f6458 00:20:57.869 [2024-11-04 17:21:58.618945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.618977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.633741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f6cc8 
00:20:57.869 [2024-11-04 17:21:58.635367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.635397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.649401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f7538 00:20:57.869 [2024-11-04 17:21:58.650994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.651040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:57.869 [2024-11-04 17:21:58.664665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f7da8 00:20:57.869 [2024-11-04 17:21:58.666311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:57.869 [2024-11-04 17:21:58.666343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:58.128 [2024-11-04 17:21:58.680864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f8618 00:20:58.128 [2024-11-04 17:21:58.682515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.128 [2024-11-04 17:21:58.682545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:58.128 [2024-11-04 17:21:58.697454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f8e88 00:20:58.128 [2024-11-04 17:21:58.698995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.128 [2024-11-04 17:21:58.699050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:58.128 [2024-11-04 17:21:58.714444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f96f8 00:20:58.128 [2024-11-04 17:21:58.716040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.128 [2024-11-04 17:21:58.716090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:58.128 [2024-11-04 17:21:58.730275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with pdu=0x2000166f9f68 00:20:58.128 [2024-11-04 17:21:58.731908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.128 [2024-11-04 17:21:58.731957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:58.128 [2024-11-04 17:21:58.747058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727750) with 
pdu=0x2000166fa7d8
00:20:58.128 [2024-11-04 17:21:58.749662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:58.128 [2024-11-04 17:21:58.749697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:20:58.128 15118.00 IOPS, 59.05 MiB/s
00:20:58.128 Latency(us)
00:20:58.128 [2024-11-04T17:21:58.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:58.128 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:58.128 nvme0n1 : 2.00 15159.34 59.22 0.00 0.00 8435.40 6315.29 37653.41
00:20:58.128 [2024-11-04T17:21:58.932Z] ===================================================================================================================
00:20:58.128 [2024-11-04T17:21:58.932Z] Total : 15159.34 59.22 0.00 0.00 8435.40 6315.29 37653.41
00:20:58.128 {
00:20:58.129 "results": [
00:20:58.129 {
00:20:58.129 "job": "nvme0n1",
00:20:58.129 "core_mask": "0x2",
00:20:58.129 "workload": "randwrite",
00:20:58.129 "status": "finished",
00:20:58.129 "queue_depth": 128,
00:20:58.129 "io_size": 4096,
00:20:58.129 "runtime": 2.002989,
00:20:58.129 "iops": 15159.344359854198,
00:20:58.129 "mibps": 59.21618890568046,
00:20:58.129 "io_failed": 0,
00:20:58.129 "io_timeout": 0,
00:20:58.129 "avg_latency_us": 8435.397592843197,
00:20:58.129 "min_latency_us": 6315.2872727272725,
00:20:58.129 "max_latency_us": 37653.41090909091
00:20:58.129 }
00:20:58.129 ],
00:20:58.129 "core_count": 1
00:20:58.129 }
00:20:58.129 17:21:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:58.129 17:21:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:58.129 17:21:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:58.129 17:21:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:58.129 | .driver_specific
00:20:58.129 | .nvme_error
00:20:58.129 | .status_code
00:20:58.129 | .command_transient_transport_error'
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 119 > 0 ))
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80345
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80345 ']'
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80345
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80345
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
killing process with pid 80345
17:21:59
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80345'
17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80345
Received shutdown signal, test time was about 2.000000 seconds
00:20:58.389
00:20:58.389 Latency(us)
00:20:58.389 [2024-11-04T17:21:59.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:58.389 [2024-11-04T17:21:59.193Z] ===================================================================================================================
00:20:58.389 [2024-11-04T17:21:59.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:58.389 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80345
00:20:58.654 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:20:58.654 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:58.654 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:58.654 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:58.654 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:58.655 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:20:58.655 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80398
00:20:58.655 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80398 /var/tmp/bperf.sock
00:20:58.655 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80398 ']'
00:20:58.655 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:58.655 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:58.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:58.655 17:21:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:58.655 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:58.655 Zero copy mechanism will not be used.
00:20:58.655 [2024-11-04 17:21:59.370188] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization...
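The get_transient_errcount trace just before the shutdown above shows how the harness turns the stream of COMMAND TRANSIENT TRANSPORT ERROR completions into a pass/fail check: it asks the bdevperf RPC server for per-bdev I/O statistics and pulls the transient-transport-error counter out of the NVMe error stats with jq, then requires it to be non-zero (119 in this run). A minimal stand-alone sketch of that step, assuming the same SPDK checkout path and an already-attached controller created with --nvme-error-stat; the variable names and the exit handling are illustrative, not part of the test:

    #!/usr/bin/env bash
    # Count transient transport errors recorded for nvme0n1 and require at least one.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # checkout path as seen in the trace
    sock=/var/tmp/bperf.sock                          # bdevperf RPC socket as seen in the trace

    # bdev_get_iostat reports driver-specific NVMe error counters for controllers
    # created after bdev_nvme_set_options --nvme-error-stat.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # Zero would mean the injected CRC32C corruption never produced a digest error.
    (( errcount > 0 )) || { echo "no transient transport errors observed" >&2; exit 1; }
    echo "transient transport errors: $errcount"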
00:20:58.655 [2024-11-04 17:21:59.370283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80398 ]
00:20:58.914 [2024-11-04 17:21:59.510908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:58.914 [2024-11-04 17:21:59.568811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:58.914 [2024-11-04 17:21:59.626908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:59.850 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:00.418 nvme0n1
00:21:00.418 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:00.418 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.418 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:00.418 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.418 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:00.418 17:22:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:00.418 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:00.418 Zero copy mechanism will not be used.
00:21:00.418 Running I/O for 2 seconds...
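The xtrace block above records the setup for this second error pass (randwrite, 128 KiB I/O, queue depth 16): bdevperf is started in wait-for-RPC mode on /var/tmp/bperf.sock, NVMe error statistics and the bdev retry count are configured, CRC32C error injection is switched off while the controller attaches over TCP with data digest enabled (--ddgst), corruption is injected again, and perform_tests starts the timed run. A condensed sketch of the same sequence, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and that the target-side rpc_cmd calls go to the default RPC socket (that socket is not shown in the log); the polling loop stands in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    spdk=/home/vagrant/spdk_repo/spdk   # checkout path as seen in the trace
    sock=/var/tmp/bperf.sock            # bdevperf RPC socket as seen in the trace

    # Start bdevperf waiting for RPC (-z); workload, I/O size, run time and queue
    # depth match the traced run, then wait for its RPC socket to appear.
    "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    until [ -S "$sock" ]; do sleep 0.1; done

    # Enable per-controller NVMe error statistics before attaching.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep injection disabled while attaching, then attach over TCP with data digest on.
    "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable   # target-side RPC, default socket assumed
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt CRC32C results on the target (arguments copied from the trace), then run
    # the timed workload; each bad digest surfaces as a COMMAND TRANSIENT TRANSPORT
    # ERROR completion like the ones that follow.
    "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32   # target-side RPC, default socket assumed
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
    # The harness then reads bdev_get_iostat as shown earlier and kills $bperfpid.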
00:21:00.418 [2024-11-04 17:22:01.087024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.418 [2024-11-04 17:22:01.087401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.418 [2024-11-04 17:22:01.087438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.418 [2024-11-04 17:22:01.092480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.418 [2024-11-04 17:22:01.092778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.418 [2024-11-04 17:22:01.092825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.418 [2024-11-04 17:22:01.097774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.418 [2024-11-04 17:22:01.098131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.418 [2024-11-04 17:22:01.098169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.418 [2024-11-04 17:22:01.102895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.418 [2024-11-04 17:22:01.103241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.418 [2024-11-04 17:22:01.103316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.418 [2024-11-04 17:22:01.108141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.418 [2024-11-04 17:22:01.108508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.418 [2024-11-04 17:22:01.108548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.418 [2024-11-04 17:22:01.113064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.418 [2024-11-04 17:22:01.113400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.418 [2024-11-04 17:22:01.113433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.418 [2024-11-04 17:22:01.118136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.418 [2024-11-04 17:22:01.118485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.418 [2024-11-04 17:22:01.118530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.418 [2024-11-04 17:22:01.123559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.418 [2024-11-04 17:22:01.123929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.418 [2024-11-04 17:22:01.123969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.418 [2024-11-04 17:22:01.128873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.129222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.129268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.134078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.134389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.134429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.139350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.139649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.139691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.144754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.145125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.145165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.150315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.150703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.150747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.156026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.156406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.156445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.161357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.161714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.161754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.166831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.167170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.167241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.171916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.172271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.172316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.176941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.177290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.177350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.182155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.182519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.182560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.187502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.187858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.187908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.192952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.193256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.193302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.198580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.198915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.198956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.204033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.204396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.204438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.209653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.209967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.210001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.419 [2024-11-04 17:22:01.215094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.419 [2024-11-04 17:22:01.215439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.419 [2024-11-04 17:22:01.215484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.220642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.220938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.220972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.226023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.226361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.226419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.231400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.231753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 
[2024-11-04 17:22:01.231792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.236456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.236821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.236860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.241721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.242056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.242091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.246967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.247313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.247368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.252385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.252730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.252769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.257670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.258001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.258037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.263005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.263366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.263404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.268123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.268471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.268510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.273193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.273537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.273576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.278374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.678 [2024-11-04 17:22:01.278716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.678 [2024-11-04 17:22:01.278763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.678 [2024-11-04 17:22:01.283749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.284123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.284162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.288974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.289351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.289391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.294215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.294619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.294655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.299087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.299457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.299497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.304698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.305006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.305041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.309976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.310291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.310330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.315338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.315691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.315733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.320664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.320993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.321039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.325798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.326146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.326183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.330885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.331248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.331318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.336023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.336423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.336462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.341458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.341793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.341838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.346678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.347024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.347077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.351663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.351999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.352039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.356979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.357381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.357419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.362366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.362705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.362744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.367612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.367922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.367957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.372828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.373181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.373235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.378107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 
[2024-11-04 17:22:01.378427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.378461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.383239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.383586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.383644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.388246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.388571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.388615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.393377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.393710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.393756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.398419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.398751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.398784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.403981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.404351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.404393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.409285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.409669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.409708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.414698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) 
with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.414995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.415029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.420063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.679 [2024-11-04 17:22:01.420436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.679 [2024-11-04 17:22:01.420478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.679 [2024-11-04 17:22:01.425146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.425502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.425539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.430291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.430649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.430697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.435298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.435652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.435691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.440300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.440615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.440671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.445282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.445607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.445664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.450451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.450803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.450842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.455297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.455651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.455689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.460043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.460374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.460408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.464797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.465123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.465169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.470079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.470458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.470497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.680 [2024-11-04 17:22:01.475671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.680 [2024-11-04 17:22:01.475968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.680 [2024-11-04 17:22:01.476005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.481330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.481712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.481754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.486894] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.487289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.487342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.492435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.492806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.492845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.497805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.498167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.498219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.502886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.503252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.503321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.508054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.508450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.508488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.513078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.513419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.513455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.518100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.518462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.518506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:21:00.940 [2024-11-04 17:22:01.523519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.523880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.523920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.528855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.529253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.529306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.534313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.534698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.534738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.539692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.540076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.540148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.545065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.545434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.545489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.550373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.550727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.550787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.555588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.555931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.555964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.560650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.560977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.561021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.565615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.565950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.565983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.570894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.571292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.571342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.576676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.577020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.577065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.582162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.940 [2024-11-04 17:22:01.582578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.940 [2024-11-04 17:22:01.582615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.940 [2024-11-04 17:22:01.587740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.588083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.588140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.592991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.593375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.593400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.598728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.599127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.599169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.604516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.604867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.604907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.609847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.610220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.610269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.614940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.615317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.615362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.620381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.620758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.620797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.625750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.626100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.626141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.631168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.631546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.631584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.636693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.637043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.637081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.642337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.642698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.642732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.647536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.647868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.647903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.652616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.652947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.652986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.657840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.658216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.658260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.663689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.664004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.664043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.669159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.669514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 
[2024-11-04 17:22:01.669553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.675084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.675447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.675487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.680686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.680998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.681041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.686495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.686850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.686889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.691676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.692005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.692047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.696943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.697277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.697322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.702230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.702540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.702580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.707413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.707765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.707799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.712564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.712906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.712945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.717837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.718223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.718255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.723587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.723913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.723963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.729183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.941 [2024-11-04 17:22:01.729562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.941 [2024-11-04 17:22:01.729599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:00.941 [2024-11-04 17:22:01.734607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.942 [2024-11-04 17:22:01.734963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.942 [2024-11-04 17:22:01.735000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:00.942 [2024-11-04 17:22:01.740399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:00.942 [2024-11-04 17:22:01.740773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.942 [2024-11-04 17:22:01.740813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.745948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.746261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.746295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.751153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.751524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.751561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.755948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.756294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.756361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.760813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.761129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.761162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.765698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.766045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.766079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.770718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.771105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.771143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.776052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.776397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.776431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.781162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.781511] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.781550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.786307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.786669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.786708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.791326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.791658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.791696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.796393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.796757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.796796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.801589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.801964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.802000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.806744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.807092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.807149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.811801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.812143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.812181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.816972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.817380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.817431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.202 [2024-11-04 17:22:01.822553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.202 [2024-11-04 17:22:01.822889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.202 [2024-11-04 17:22:01.822932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.828041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.828407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.828446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.833266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.833678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.833717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.838710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.839043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.839080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.843739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.844059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.844122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.848868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.849206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.849256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.854070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 
[2024-11-04 17:22:01.854381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.854421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.859489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.859882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.859920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.864716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.865045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.865077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.869808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.870134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.870167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.875460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.875791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.875829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.880781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.881175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.881227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.886053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.886404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.886443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.891396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with 
pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.891712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.891759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.896410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.896759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.896804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.901362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.901708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.901746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.906326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.906681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.906724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.911376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.911742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.911780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.916651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.916976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.917017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.922052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.922415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.922454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.927463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.927830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.927869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.932976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.933289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.933325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.938233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.938610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.938650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.943518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.943858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.943897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.948745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.949113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.949153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.953779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.954125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.954169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.958773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.959154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.203 [2024-11-04 17:22:01.959193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.203 [2024-11-04 17:22:01.963845] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.203 [2024-11-04 17:22:01.964207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.204 [2024-11-04 17:22:01.964258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.204 [2024-11-04 17:22:01.968817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.204 [2024-11-04 17:22:01.969159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.204 [2024-11-04 17:22:01.969198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.204 [2024-11-04 17:22:01.973781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.204 [2024-11-04 17:22:01.974125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.204 [2024-11-04 17:22:01.974161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.204 [2024-11-04 17:22:01.978737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.204 [2024-11-04 17:22:01.979033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.204 [2024-11-04 17:22:01.979070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.204 [2024-11-04 17:22:01.984027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.204 [2024-11-04 17:22:01.984397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.204 [2024-11-04 17:22:01.984436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.204 [2024-11-04 17:22:01.989180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.204 [2024-11-04 17:22:01.989546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.204 [2024-11-04 17:22:01.989585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.204 [2024-11-04 17:22:01.994420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.204 [2024-11-04 17:22:01.994764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.204 [2024-11-04 17:22:01.994803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:21:01.204 [2024-11-04 17:22:01.999642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.204 [2024-11-04 17:22:01.999939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.204 [2024-11-04 17:22:01.999973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.464 [2024-11-04 17:22:02.005142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.005503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.005541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.010652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.010978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.011018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.015758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.016143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.016184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.020908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.021269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.021329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.026128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.026481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.026519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.031200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.031542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.031577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.036076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.036414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.036446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.040974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.041340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.041373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.045993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.046303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.046365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.050897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.051221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.051262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.056056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.056428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.056468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.061342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.061698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.061737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.066577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.066915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.066954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.071823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.072207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.072255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.076901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.077258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.077317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.465 5876.00 IOPS, 734.50 MiB/s [2024-11-04T17:22:02.269Z] [2024-11-04 17:22:02.083257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.083612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.083651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.088362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.088714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.088747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.093346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.093664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.093702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.098313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.098700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.098739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.103609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.103939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 
[2024-11-04 17:22:02.103973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.108860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.109207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.109254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.113970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.114277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.114313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.119221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.119574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.119613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.124302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.124646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.124686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.129194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.129550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.129589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.134036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.134376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.134414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.465 [2024-11-04 17:22:02.138937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.465 [2024-11-04 17:22:02.139278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:01.465 [2024-11-04 17:22:02.139332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.143945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.144302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.144359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.149252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.149588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.149633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.154559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.154915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.154959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.159871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.160170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.160217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.165229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.165638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.165677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.170797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.171177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.171230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.176060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.176405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.176440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.181477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.181833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.181867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.186782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.187132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.187171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.191928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.192292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.192340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.197321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.197683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.197723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.202464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.202813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.202851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.207664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.207987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.208045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.212774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.213120] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.213158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.217952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.218265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.218299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.223238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.223577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.223616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.228697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.228995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.229029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.233763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.234067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.234100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.239145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.239517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.239555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.244384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.244726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.244771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.249523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.249874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.249921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.254534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.254902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.254941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.259535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.259894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.259932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.466 [2024-11-04 17:22:02.264895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.466 [2024-11-04 17:22:02.265216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-11-04 17:22:02.265264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.269992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.270314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.270352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.275013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.275366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.275404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.280053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.280413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.280451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.285081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 
[2024-11-04 17:22:02.285445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.285483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.290510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.290880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.290918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.295702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.296081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.296119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.300921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.301262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.301304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.306206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.306549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.306587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.311109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.311459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.311497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.316431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.316774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.316812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.328083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with 
pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.328601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.328640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.338142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.338487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.338530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.345955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.346289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.346333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.353475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.353835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.361091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.361441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.361485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.368492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.368860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.368899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.375994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.376366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.376404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.383594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.383940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.383980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.391014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.391386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.391424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.398498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.398840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.398885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.406127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.406505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.406548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.413696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.414051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.414118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.421406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.421741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.421779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.727 [2024-11-04 17:22:02.429008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.727 [2024-11-04 17:22:02.429364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.727 [2024-11-04 17:22:02.429403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.436427] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.436780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.436826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.443939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.444302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.444352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.451458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.451778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.451824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.458943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.459306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.459340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.466262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.466605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.466638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.472038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.472367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.472412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.477719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.478052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.478087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:01.728 [2024-11-04 17:22:02.483520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.483828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.483861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.489339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.489689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.489728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.495047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.495389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.495423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.500800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.501127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.501156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.506756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.507078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.507105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.512502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.512823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.512853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.518275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.518556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.518583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.728 [2024-11-04 17:22:02.524097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.728 [2024-11-04 17:22:02.525461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.728 [2024-11-04 17:22:02.525501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.988 [2024-11-04 17:22:02.531072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.988 [2024-11-04 17:22:02.531385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-11-04 17:22:02.531413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.988 [2024-11-04 17:22:02.537050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.988 [2024-11-04 17:22:02.537516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-11-04 17:22:02.537549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.988 [2024-11-04 17:22:02.543178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.988 [2024-11-04 17:22:02.543516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-11-04 17:22:02.543549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.988 [2024-11-04 17:22:02.548976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.988 [2024-11-04 17:22:02.549491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.988 [2024-11-04 17:22:02.549531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.988 [2024-11-04 17:22:02.555362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.988 [2024-11-04 17:22:02.555675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.555704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.561296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.561590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.561649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.567250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.567590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.567619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.573057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.573506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.573537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.579264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.579543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.579571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.585162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.585637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.585669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.591366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.591672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.591700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.597100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.597542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.597574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.603323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.603799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.604078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.609514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.610046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.610296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.615791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.616327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.616559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.622454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.622954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.623173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.628798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.629367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.629612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.635460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.635945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.636171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.641963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.642508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.642827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.648412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.648691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 
[2024-11-04 17:22:02.648720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.654331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.654616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.654659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.660094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.660643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.660678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.666138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.666483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.666510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.672552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.672887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.672916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.678834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.679174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.679202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.685045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.685365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.685397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.691196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.691515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.691553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.697489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.697844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.697882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.703571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.703885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.703913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.709703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.710044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.710084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.715652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.715958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.715985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.989 [2024-11-04 17:22:02.721785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.989 [2024-11-04 17:22:02.722114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.989 [2024-11-04 17:22:02.722143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.727883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.728490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.728523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.734517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.734868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.734898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.740669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.740999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.741026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.746650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.746925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.746952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.752445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.752730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.752761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.758244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.758534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.758569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.764077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.764570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.764603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.770443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.770782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.770812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.776440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.776773] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.776801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.782504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.782820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.782848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.990 [2024-11-04 17:22:02.788752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:01.990 [2024-11-04 17:22:02.789060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.990 [2024-11-04 17:22:02.789087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.794714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.795039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.795084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.800639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.800920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.800947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.806486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.806760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.806787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.812165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.812771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.812817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.818564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.818901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.818931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.824765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.825101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.825128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.830955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.831273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.831309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.837108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.837451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.837484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.843335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.843609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.843668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.849460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.849755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.849777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.855332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.855606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.855627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.861093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 
[2024-11-04 17:22:02.861458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.861507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.867064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.867702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.867734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.873627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.873950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.250 [2024-11-04 17:22:02.873979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.250 [2024-11-04 17:22:02.879649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.250 [2024-11-04 17:22:02.879947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.879974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.885656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.885987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.886015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.891674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.891965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.892023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.897790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.898150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.898193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.903830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with 
pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.904121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.904148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.909703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.910023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.910051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.915629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.915934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.915986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.921619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.921967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.921997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.927571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.927876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.927905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.933426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.933743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.933771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.939259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.939535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.939557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.944832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.945120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.945147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.950636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.950916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.950944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.956161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.956490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.956522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.961937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.962292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.962319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.967871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.968193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.968228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.973706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.974020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.974060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.979575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.979889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.979918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.985504] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.985825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.985854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.991335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.991608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.991650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:02.997140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:02.997515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:02.997565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:03.003109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:03.003757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:03.003791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:03.009374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:03.009690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:03.009721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:03.015159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:03.015636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:03.015686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:03.021049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:03.021355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:03.021417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:02.251 [2024-11-04 17:22:03.026801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:03.027252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:03.027304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:03.032717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:03.032990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:03.033017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.251 [2024-11-04 17:22:03.038295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.251 [2024-11-04 17:22:03.038581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.251 [2024-11-04 17:22:03.038607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.252 [2024-11-04 17:22:03.043818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.252 [2024-11-04 17:22:03.044106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.252 [2024-11-04 17:22:03.044134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.252 [2024-11-04 17:22:03.049872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.252 [2024-11-04 17:22:03.050404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.252 [2024-11-04 17:22:03.050446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.535 [2024-11-04 17:22:03.056224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.535 [2024-11-04 17:22:03.056635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.535 [2024-11-04 17:22:03.056680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.535 [2024-11-04 17:22:03.062344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.535 [2024-11-04 17:22:03.062605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.535 [2024-11-04 17:22:03.062665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.535 [2024-11-04 17:22:03.067968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.535 [2024-11-04 17:22:03.068074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.535 [2024-11-04 17:22:03.068096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:02.535 [2024-11-04 17:22:03.073813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.535 [2024-11-04 17:22:03.074201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.535 [2024-11-04 17:22:03.074250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:02.535 [2024-11-04 17:22:03.080022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x727a90) with pdu=0x2000166fef90 00:21:02.535 [2024-11-04 17:22:03.080106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:02.535 [2024-11-04 17:22:03.080127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.535 5518.50 IOPS, 689.81 MiB/s 00:21:02.535 Latency(us) 00:21:02.535 [2024-11-04T17:22:03.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.535 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:02.535 nvme0n1 : 2.00 5514.83 689.35 0.00 0.00 2894.89 2115.03 12392.26 00:21:02.535 [2024-11-04T17:22:03.339Z] =================================================================================================================== 00:21:02.535 [2024-11-04T17:22:03.339Z] Total : 5514.83 689.35 0.00 0.00 2894.89 2115.03 12392.26 00:21:02.535 { 00:21:02.535 "results": [ 00:21:02.535 { 00:21:02.535 "job": "nvme0n1", 00:21:02.535 "core_mask": "0x2", 00:21:02.535 "workload": "randwrite", 00:21:02.535 "status": "finished", 00:21:02.535 "queue_depth": 16, 00:21:02.535 "io_size": 131072, 00:21:02.535 "runtime": 2.004233, 00:21:02.535 "iops": 5514.827866819875, 00:21:02.535 "mibps": 689.3534833524844, 00:21:02.535 "io_failed": 0, 00:21:02.535 "io_timeout": 0, 00:21:02.535 "avg_latency_us": 2894.8889236159657, 00:21:02.535 "min_latency_us": 2115.0254545454545, 00:21:02.535 "max_latency_us": 12392.261818181818 00:21:02.535 } 00:21:02.535 ], 00:21:02.535 "core_count": 1 00:21:02.535 } 00:21:02.535 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:02.535 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:02.535 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:02.535 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:02.535 | .driver_specific 00:21:02.535 | .nvme_error 00:21:02.535 | .status_code 00:21:02.535 | .command_transient_transport_error' 00:21:02.805 17:22:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 356 > 0 )) 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80398 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80398 ']' 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80398 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80398 00:21:02.805 killing process with pid 80398 00:21:02.805 Received shutdown signal, test time was about 2.000000 seconds 00:21:02.805 00:21:02.805 Latency(us) 00:21:02.805 [2024-11-04T17:22:03.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.805 [2024-11-04T17:22:03.609Z] =================================================================================================================== 00:21:02.805 [2024-11-04T17:22:03.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80398' 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80398 00:21:02.805 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80398 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80221 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80221 ']' 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80221 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80221 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:03.064 killing process with pid 80221 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80221' 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80221 00:21:03.064 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80221 00:21:03.324 00:21:03.324 real 0m16.369s 00:21:03.324 user 0m31.827s 00:21:03.324 sys 0m4.894s 
00:21:03.324 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:03.324 ************************************ 00:21:03.324 END TEST nvmf_digest_error 00:21:03.324 ************************************ 00:21:03.324 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:03.324 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:03.324 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:03.324 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.324 17:22:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.324 rmmod nvme_tcp 00:21:03.324 rmmod nvme_fabrics 00:21:03.324 rmmod nvme_keyring 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.324 Process with pid 80221 is not found 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80221 ']' 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80221 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 80221 ']' 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 80221 00:21:03.324 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (80221) - No such process 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 80221 is not found' 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:03.324 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:03.583 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:03.583 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:03.583 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip 
link set nvmf_tgt_br2 nomaster 00:21:03.583 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:03.583 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:03.583 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:03.583 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:03.583 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:03.584 ************************************ 00:21:03.584 END TEST nvmf_digest 00:21:03.584 ************************************ 00:21:03.584 00:21:03.584 real 0m34.919s 00:21:03.584 user 1m6.061s 00:21:03.584 sys 0m9.970s 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:03.584 17:22:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.844 ************************************ 00:21:03.844 START TEST nvmf_host_multipath 00:21:03.844 ************************************ 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:03.844 * Looking for test storage... 
00:21:03.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:03.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.844 --rc genhtml_branch_coverage=1 00:21:03.844 --rc genhtml_function_coverage=1 00:21:03.844 --rc genhtml_legend=1 00:21:03.844 --rc geninfo_all_blocks=1 00:21:03.844 --rc geninfo_unexecuted_blocks=1 00:21:03.844 00:21:03.844 ' 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:03.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.844 --rc genhtml_branch_coverage=1 00:21:03.844 --rc genhtml_function_coverage=1 00:21:03.844 --rc genhtml_legend=1 00:21:03.844 --rc geninfo_all_blocks=1 00:21:03.844 --rc geninfo_unexecuted_blocks=1 00:21:03.844 00:21:03.844 ' 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:03.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.844 --rc genhtml_branch_coverage=1 00:21:03.844 --rc genhtml_function_coverage=1 00:21:03.844 --rc genhtml_legend=1 00:21:03.844 --rc geninfo_all_blocks=1 00:21:03.844 --rc geninfo_unexecuted_blocks=1 00:21:03.844 00:21:03.844 ' 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:03.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.844 --rc genhtml_branch_coverage=1 00:21:03.844 --rc genhtml_function_coverage=1 00:21:03.844 --rc genhtml_legend=1 00:21:03.844 --rc geninfo_all_blocks=1 00:21:03.844 --rc geninfo_unexecuted_blocks=1 00:21:03.844 00:21:03.844 ' 00:21:03.844 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:03.845 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.845 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:04.104 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:04.104 Cannot find device "nvmf_init_br" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:04.105 Cannot find device "nvmf_init_br2" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:04.105 Cannot find device "nvmf_tgt_br" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.105 Cannot find device "nvmf_tgt_br2" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:04.105 Cannot find device "nvmf_init_br" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:04.105 Cannot find device "nvmf_init_br2" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:04.105 Cannot find device "nvmf_tgt_br" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:04.105 Cannot find device "nvmf_tgt_br2" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:04.105 Cannot find device "nvmf_br" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:04.105 Cannot find device "nvmf_init_if" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:04.105 Cannot find device "nvmf_init_if2" 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:04.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:04.105 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:04.364 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:04.364 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:04.364 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:04.364 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:04.364 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:04.365 17:22:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:04.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:04.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:21:04.365 00:21:04.365 --- 10.0.0.3 ping statistics --- 00:21:04.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.365 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:04.365 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:04.365 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:21:04.365 00:21:04.365 --- 10.0.0.4 ping statistics --- 00:21:04.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.365 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:04.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:04.365 00:21:04.365 --- 10.0.0.1 ping statistics --- 00:21:04.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.365 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:04.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:04.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:04.365 00:21:04.365 --- 10.0.0.2 ping statistics --- 00:21:04.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.365 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80719 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80719 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80719 ']' 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:04.365 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:04.365 [2024-11-04 17:22:05.116555] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:21:04.365 [2024-11-04 17:22:05.116654] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.624 [2024-11-04 17:22:05.263940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:04.624 [2024-11-04 17:22:05.327376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.624 [2024-11-04 17:22:05.327667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.624 [2024-11-04 17:22:05.327703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.624 [2024-11-04 17:22:05.327712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.624 [2024-11-04 17:22:05.327720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.624 [2024-11-04 17:22:05.329042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.624 [2024-11-04 17:22:05.329051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.624 [2024-11-04 17:22:05.383373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:04.884 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:04.884 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:21:04.884 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.884 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:04.884 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:04.884 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.884 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80719 00:21:04.884 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:05.142 [2024-11-04 17:22:05.781892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.142 17:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:05.400 Malloc0 00:21:05.400 17:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:05.658 17:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.916 17:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:06.175 [2024-11-04 17:22:06.919384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:06.175 17:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:06.433 [2024-11-04 17:22:07.175576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:06.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.433 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80766 00:21:06.433 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.433 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:06.433 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80766 /var/tmp/bdevperf.sock 00:21:06.433 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80766 ']' 00:21:06.433 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.434 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:06.434 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.434 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:06.434 17:22:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:07.810 17:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:07.810 17:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:21:07.810 17:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:07.810 17:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:08.069 Nvme0n1 00:21:08.069 17:22:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:08.636 Nvme0n1 00:21:08.636 17:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:08.636 17:22:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:09.575 17:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:09.575 17:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:09.834 17:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:10.093 17:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:10.093 17:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80807 00:21:10.093 17:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80719 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:10.093 17:22:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:16.658 17:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:16.658 17:22:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:16.658 Attaching 4 probes... 00:21:16.658 @path[10.0.0.3, 4421]: 18809 00:21:16.658 @path[10.0.0.3, 4421]: 18707 00:21:16.658 @path[10.0.0.3, 4421]: 18771 00:21:16.658 @path[10.0.0.3, 4421]: 18528 00:21:16.658 @path[10.0.0.3, 4421]: 17752 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80807 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:16.658 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:16.917 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:16.917 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80926 00:21:16.917 17:22:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80719 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:16.917 17:22:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:23.484 Attaching 4 probes... 00:21:23.484 @path[10.0.0.3, 4420]: 17447 00:21:23.484 @path[10.0.0.3, 4420]: 18294 00:21:23.484 @path[10.0.0.3, 4420]: 17694 00:21:23.484 @path[10.0.0.3, 4420]: 18299 00:21:23.484 @path[10.0.0.3, 4420]: 17869 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:23.484 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80926 00:21:23.485 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:23.485 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:23.485 17:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:23.485 17:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:23.745 17:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:23.745 17:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81044 00:21:23.745 17:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80719 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:23.745 17:22:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:30.332 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:30.333 Attaching 4 probes... 00:21:30.333 @path[10.0.0.3, 4421]: 15006 00:21:30.333 @path[10.0.0.3, 4421]: 17904 00:21:30.333 @path[10.0.0.3, 4421]: 17573 00:21:30.333 @path[10.0.0.3, 4421]: 17576 00:21:30.333 @path[10.0.0.3, 4421]: 17584 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81044 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:30.333 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:30.333 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:30.591 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:30.591 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81156 00:21:30.591 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80719 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:30.591 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:37.158 Attaching 4 probes... 
00:21:37.158 00:21:37.158 00:21:37.158 00:21:37.158 00:21:37.158 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81156 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:37.158 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:37.417 17:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:37.417 17:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:37.417 17:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81269 00:21:37.417 17:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80719 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:37.417 17:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.984 Attaching 4 probes... 
00:21:43.984 @path[10.0.0.3, 4421]: 16850 00:21:43.984 @path[10.0.0.3, 4421]: 17328 00:21:43.984 @path[10.0.0.3, 4421]: 16840 00:21:43.984 @path[10.0.0.3, 4421]: 17356 00:21:43.984 @path[10.0.0.3, 4421]: 18840 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81269 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:43.984 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:45.361 17:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:45.361 17:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81397 00:21:45.361 17:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:45.361 17:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80719 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:51.927 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:51.927 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:51.927 Attaching 4 probes... 
00:21:51.927 @path[10.0.0.3, 4420]: 17604 00:21:51.927 @path[10.0.0.3, 4420]: 18762 00:21:51.927 @path[10.0.0.3, 4420]: 18399 00:21:51.927 @path[10.0.0.3, 4420]: 18871 00:21:51.927 @path[10.0.0.3, 4420]: 17794 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81397 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:51.927 [2024-11-04 17:22:52.302379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:51.927 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:58.493 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:58.493 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81567 00:21:58.493 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80719 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:58.493 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:05.083 Attaching 4 probes... 
00:22:05.083 @path[10.0.0.3, 4421]: 16673 00:22:05.083 @path[10.0.0.3, 4421]: 16627 00:22:05.083 @path[10.0.0.3, 4421]: 16826 00:22:05.083 @path[10.0.0.3, 4421]: 16933 00:22:05.083 @path[10.0.0.3, 4421]: 16965 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81567 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80766 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80766 ']' 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80766 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80766 00:22:05.083 killing process with pid 80766 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80766' 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80766 00:22:05.083 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80766 00:22:05.083 { 00:22:05.083 "results": [ 00:22:05.083 { 00:22:05.083 "job": "Nvme0n1", 00:22:05.083 "core_mask": "0x4", 00:22:05.083 "workload": "verify", 00:22:05.083 "status": "terminated", 00:22:05.083 "verify_range": { 00:22:05.083 "start": 0, 00:22:05.083 "length": 16384 00:22:05.083 }, 00:22:05.083 "queue_depth": 128, 00:22:05.083 "io_size": 4096, 00:22:05.083 "runtime": 55.692218, 00:22:05.083 "iops": 7597.416931751578, 00:22:05.083 "mibps": 29.6774098896546, 00:22:05.083 "io_failed": 0, 00:22:05.083 "io_timeout": 0, 00:22:05.083 "avg_latency_us": 16823.026600980986, 00:22:05.083 "min_latency_us": 467.31636363636363, 00:22:05.083 "max_latency_us": 7046430.72 00:22:05.083 } 00:22:05.083 ], 00:22:05.083 "core_count": 1 00:22:05.083 } 00:22:05.083 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80766 00:22:05.083 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:05.083 [2024-11-04 17:22:07.254733] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 
24.03.0 initialization... 00:22:05.083 [2024-11-04 17:22:07.254844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80766 ] 00:22:05.083 [2024-11-04 17:22:07.402213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.083 [2024-11-04 17:22:07.465797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.083 [2024-11-04 17:22:07.519664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:05.083 Running I/O for 90 seconds... 00:22:05.083 7061.00 IOPS, 27.58 MiB/s [2024-11-04T17:23:05.887Z] 8113.00 IOPS, 31.69 MiB/s [2024-11-04T17:23:05.887Z] 8608.67 IOPS, 33.63 MiB/s [2024-11-04T17:23:05.887Z] 8796.50 IOPS, 34.36 MiB/s [2024-11-04T17:23:05.887Z] 8912.40 IOPS, 34.81 MiB/s [2024-11-04T17:23:05.887Z] 8975.00 IOPS, 35.06 MiB/s [2024-11-04T17:23:05.887Z] 8959.14 IOPS, 35.00 MiB/s [2024-11-04T17:23:05.887Z] 8930.25 IOPS, 34.88 MiB/s [2024-11-04T17:23:05.887Z] [2024-11-04 17:22:17.585112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.083 [2024-11-04 17:22:17.585185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:05.083 [2024-11-04 17:22:17.585265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.083 [2024-11-04 17:22:17.585305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:05.083 [2024-11-04 17:22:17.585330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.585346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.585383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.585420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.585457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.585494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.585545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
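The INACCESSIBLE completions being dumped here are the intended effect of flipping a listener's ANA state while bdevperf keeps I/O running; the test then decides which port should now be carrying the I/O by asking the target for its listeners and filtering on the reported ANA state. A minimal sketch of that query, reusing the exact rpc.py call and jq filter from the multipath.sh@67 trace above (target NQN and addresses as in this run):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
    # With 10.0.0.3:4420 inaccessible and 10.0.0.3:4421 optimized, this prints 4421,
    # which multipath.sh@67 stores as active_port.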
00:22:05.084 [2024-11-04 17:22:17.585908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.585970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.585989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.586004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.586024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.586038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.586058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.586072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.586091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.586117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.586139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.586154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.586176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.084 [2024-11-04 17:22:17.586192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 
nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.588975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.588989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.589008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.589038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.589056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.084 [2024-11-04 17:22:17.589069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:05.084 [2024-11-04 17:22:17.589087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.589101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.589132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.589163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.589227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.589262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
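Knowing which port should be active is only half of confirm_io_on_port; the other half proves that I/O actually flowed there, by counting completions per path with the nvmf_path.bt bpftrace script and parsing the resulting trace.txt. A small sketch of that parsing step, reusing the cut/awk/sed pipeline from multipath.sh@69 above and feeding it two counter lines copied from this run (written to /tmp/trace.txt purely for illustration; the test keeps the file under test/nvmf/host/):

    printf '%s\n' '@path[10.0.0.3, 4421]: 18809' '@path[10.0.0.3, 4421]: 18707' > /tmp/trace.txt
    # Drop everything after the first ']', keep lines whose first field names the
    # target address, and take the port from the first matching line.
    port=$(cut -d ']' -f1 /tmp/trace.txt \
      | awk '$1=="@path[10.0.0.3," {print $2}' \
      | sed -n 1p)
    echo "$port"   # 4421, compared against the expected port at multipath.sh@70-@71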
00:22:05.085 [2024-11-04 17:22:17.589289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.589318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.589355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.589391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.589427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.589463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.589500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.589535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.589616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.589648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.589666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.589680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.592404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.592449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.592512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.592550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.592584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.592619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.592670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.085 [2024-11-04 17:22:17.592720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.592755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.592789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.592824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.592858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.592906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.592940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.592959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.592979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.593014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.593062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.593094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.593126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
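Each phase of this test is driven by the set_ANA_state helper, which amounts to two listener-level RPCs, one per portal, so the two ports can be put into different ANA states independently. The pair below is copied from the multipath.sh@58-@59 trace earlier in this run (the non_optimized/inaccessible combination used before confirming I/O on port 4420):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible

Once the second call lands, commands queued on the port that had been carrying the I/O start completing with ASYMMETRIC ACCESS INACCESSIBLE, which is the pattern filling this part of try.txt; the host side retries them on the remaining path (the final bdevperf summary reports io_failed: 0).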
00:22:05.085 [2024-11-04 17:22:17.593158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.593190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.593258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.593292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.085 [2024-11-04 17:22:17.593342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:05.085 [2024-11-04 17:22:17.593363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.593378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.593398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.593412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.593432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.593447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.593475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.593490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.593510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.593525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.593559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:110 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.593587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.593606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.593620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.593639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.593653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.086 [2024-11-04 17:22:17.596728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.596762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
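For the run as a whole, the bdevperf JSON summary shown earlier in this excerpt reports roughly 7597 IOPS of 4096-byte verify I/O over a 55.69 s runtime, with io_failed 0 despite all of the forced path flips. The reported throughput is consistent with those numbers; a one-line arithmetic cross-check (values copied from that summary, MiB/s = IOPS * io_size / 2^20):

    awk 'BEGIN { printf "%.4f MiB/s\n", 7597.416931751578 * 4096 / 1048576 }'
    # prints 29.6774 MiB/s, matching the reported "mibps" field

This is only an internal-consistency check of the figures in the summary, not a statement about expected throughput on other hosts.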
00:22:05.086 [2024-11-04 17:22:17.596782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.596796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.596830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.596870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.596908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.596943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.596978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.596997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.597012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.597047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.597060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.597080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.597094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.597113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.597127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.597147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.597161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.597180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.597194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.597213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.597228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.597246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.086 [2024-11-04 17:22:17.597260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:05.086 [2024-11-04 17:22:17.597322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:17.597347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:17.597373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:17.597390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:17.597411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:17.597427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:17.597449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:17.597464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:17.597485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:17.597501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:17.597524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:17.597539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:17.597560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:17.597576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:17.597597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:17.597627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:17.597663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:17.597678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:05.087 8904.22 IOPS, 34.78 MiB/s [2024-11-04T17:23:05.891Z] 8914.10 IOPS, 34.82 MiB/s [2024-11-04T17:23:05.891Z] 8927.36 IOPS, 34.87 MiB/s [2024-11-04T17:23:05.891Z] 8926.08 IOPS, 34.87 MiB/s [2024-11-04T17:23:05.891Z] 8935.46 IOPS, 34.90 MiB/s [2024-11-04T17:23:05.891Z] 8938.93 IOPS, 34.92 MiB/s [2024-11-04T17:23:05.891Z] [2024-11-04 17:22:24.163357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.163435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.163516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.163555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.163639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.163677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.163713] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.163750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.163786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:24.163823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:24.163860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:24.163897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:24.163935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.163970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:24.163999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:24.164049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:24.164082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:05.087 [2024-11-04 17:22:24.164156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.164500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.164538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.164572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.164607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.164659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.164697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.087 [2024-11-04 17:22:24.164737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:05.087 [2024-11-04 17:22:24.164760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.164776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.164798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.164813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.164835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.164851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.164873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.164889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.164911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.164936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.164960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.164976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.165148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.165183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.165218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.165254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.165303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.165341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.165378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.165414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:22:05.088 [2024-11-04 17:22:24.165677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.165973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.165990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.166013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.166028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.166052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.166078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.166100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.088 [2024-11-04 17:22:24.166115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.166137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.166153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.166178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.166193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.166228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.166260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.166297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.088 [2024-11-04 17:22:24.166312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:05.088 [2024-11-04 17:22:24.166332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:05.089 [2024-11-04 17:22:24.166901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.166974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.166994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.167049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.167099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.167167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.089 [2024-11-04 17:22:24.167745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.167782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.167820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.167863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.167900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.089 [2024-11-04 17:22:24.167938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:05.089 [2024-11-04 17:22:24.167974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:24.167989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.168011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:24.168027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.168809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:24.168838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.168872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.168890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.168919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.168935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:22:05.090 [2024-11-04 17:22:24.168971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.168987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:24.169759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:24.169776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.090 8858.73 IOPS, 34.60 MiB/s [2024-11-04T17:23:05.894Z] 8381.75 IOPS, 32.74 MiB/s [2024-11-04T17:23:05.894Z] 8413.88 IOPS, 32.87 MiB/s [2024-11-04T17:23:05.894Z] 8432.67 IOPS, 32.94 MiB/s [2024-11-04T17:23:05.894Z] 8454.53 IOPS, 33.03 MiB/s [2024-11-04T17:23:05.894Z] 8467.80 IOPS, 33.08 MiB/s [2024-11-04T17:23:05.894Z] 8489.71 IOPS, 33.16 MiB/s [2024-11-04T17:23:05.894Z] 8516.18 IOPS, 33.27 MiB/s [2024-11-04T17:23:05.894Z] [2024-11-04 17:22:31.338547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:31.339130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.339337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 
[2024-11-04 17:22:31.339488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.339602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:31.339712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.339827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:31.339904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.340017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:31.340104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.340186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:31.340280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.340401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:31.340492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.340577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.090 [2024-11-04 17:22:31.340669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.340752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:31.340855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.340969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:31.341049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.341141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:31.341275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.341390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59576 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:31.341471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.341563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:31.341657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.341741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:31.341818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.341904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:31.342021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.342121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.090 [2024-11-04 17:22:31.342208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:05.090 [2024-11-04 17:22:31.342357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.342460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.342567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.342665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.342742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.342820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.342912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.342995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.343077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.343153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.343269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.343380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.343477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.343555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.343655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.343773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.343885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.343983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.344179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.344388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.344572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.344708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.344745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.344778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.344812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.344853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.344887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.344920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.344953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.344972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.344986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.091 [2024-11-04 17:22:31.345459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.345510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.345544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.345578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.345611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.345653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.345689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.345723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:05.091 [2024-11-04 17:22:31.345743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.091 [2024-11-04 17:22:31.345758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.345778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.345792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.345812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.345826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.345846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.345861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.345881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.345895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.345917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:05.092 [2024-11-04 17:22:31.345932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.345981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.345998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.092 [2024-11-04 17:22:31.346480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.092 [2024-11-04 17:22:31.346530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.092 [2024-11-04 17:22:31.346563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.092 [2024-11-04 17:22:31.346599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.092 [2024-11-04 17:22:31.346634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.092 [2024-11-04 17:22:31.346676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.092 [2024-11-04 17:22:31.346710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.092 [2024-11-04 17:22:31.346762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.346969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.346983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.347003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.347018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.347038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.347052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:05.092 [2024-11-04 17:22:31.347072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.092 [2024-11-04 17:22:31.347086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.347141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:22:05.093 [2024-11-04 17:22:31.347161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.347175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.347209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.347269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.347303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.347349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.347388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.347980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.347994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.348015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.348029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.348050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.348064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.348092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.348108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.348956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.348982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.349030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.349071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.349111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:05.093 [2024-11-04 17:22:31.349151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.349191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.349261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.349320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.349368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.349413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.349456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.349514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:05.093 [2024-11-04 17:22:31.349557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.093 [2024-11-04 17:22:31.349577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:05.093 8159.48 IOPS, 31.87 MiB/s [2024-11-04T17:23:05.897Z] 7819.50 IOPS, 30.54 MiB/s [2024-11-04T17:23:05.898Z] 7506.72 IOPS, 29.32 MiB/s [2024-11-04T17:23:05.898Z] 7218.00 IOPS, 28.20 MiB/s [2024-11-04T17:23:05.898Z] 6950.67 IOPS, 27.15 MiB/s [2024-11-04T17:23:05.898Z] 6702.43 IOPS, 26.18 MiB/s [2024-11-04T17:23:05.898Z] 6471.31 IOPS, 25.28 MiB/s [2024-11-04T17:23:05.898Z] 6532.70 IOPS, 25.52 MiB/s [2024-11-04T17:23:05.898Z] 6592.55 IOPS, 25.75 MiB/s 
[2024-11-04T17:23:05.898Z] 6659.41 IOPS, 26.01 MiB/s [2024-11-04T17:23:05.898Z] 6711.91 IOPS, 26.22 MiB/s [2024-11-04T17:23:05.898Z] 6787.68 IOPS, 26.51 MiB/s [2024-11-04T17:23:05.898Z] 6867.11 IOPS, 26.82 MiB/s [2024-11-04T17:23:05.898Z] [2024-11-04 17:22:44.730283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.094 [2024-11-04 17:22:44.730713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:118952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 
[2024-11-04 17:22:44.730916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.730965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.730987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:119040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.094 [2024-11-04 17:22:44.731339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.094 [2024-11-04 17:22:44.731351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.731765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.095 [2024-11-04 17:22:44.731790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.095 [2024-11-04 17:22:44.731816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.095 [2024-11-04 17:22:44.731841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.095 [2024-11-04 17:22:44.731866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.095 [2024-11-04 17:22:44.731892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.095 [2024-11-04 17:22:44.731917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.095 [2024-11-04 17:22:44.731942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.731956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.095 [2024-11-04 17:22:44.731973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:05.095 [2024-11-04 17:22:44.731987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 
17:22:44.732253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.095 [2024-11-04 17:22:44.732387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.095 [2024-11-04 17:22:44.732399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.732424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.732450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.732475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.732500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.732526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.732551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.732576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.732601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732790] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.732971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.732989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.733016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.096 [2024-11-04 17:22:44.733056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.096 [2024-11-04 17:22:44.733320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.096 [2024-11-04 17:22:44.733340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119352 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.097 [2024-11-04 17:22:44.733582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.097 
[2024-11-04 17:22:44.733607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.097 [2024-11-04 17:22:44.733632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.097 [2024-11-04 17:22:44.733666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.097 [2024-11-04 17:22:44.733693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.097 [2024-11-04 17:22:44.733718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.097 [2024-11-04 17:22:44.733743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.097 [2024-11-04 17:22:44.733768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.733817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:05.097 [2024-11-04 17:22:44.733831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:05.097 [2024-11-04 17:22:44.733841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118944 len:8 PRP1 0x0 PRP2 0x0 00:22:05.097 [2024-11-04 17:22:44.733854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.734051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.097 [2024-11-04 17:22:44.734079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.734094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.097 [2024-11-04 17:22:44.734107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
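Note: the block above is a burst of qid:1 READs and WRITEs all completing with ABORTED - SQ DELETION (00/08); the host is tearing down the I/O queue pair on the failing path, so every in-flight command is completed as aborted and left for the bdev layer to retry. When reading a captured log like this offline, a couple of shell one-liners summarize the storm faster than scrolling (the file name build.log below is only an example, not something this run produces):

    grep -c 'ABORTED - SQ DELETION' build.log                  # how many completions were aborted
    grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c   # split by opcode on queue 1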
00:22:05.097 [2024-11-04 17:22:44.734120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.097 [2024-11-04 17:22:44.734133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.734146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.097 [2024-11-04 17:22:44.734158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.097 [2024-11-04 17:22:44.734171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24bbe50 is same with the state(6) to be set 00:22:05.097 [2024-11-04 17:22:44.735197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:05.097 [2024-11-04 17:22:44.735259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bbe50 (9): Bad file descriptor 00:22:05.097 [2024-11-04 17:22:44.735618] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:05.097 [2024-11-04 17:22:44.735650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24bbe50 with addr=10.0.0.3, port=4421 00:22:05.097 [2024-11-04 17:22:44.735667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24bbe50 is same with the state(6) to be set 00:22:05.097 [2024-11-04 17:22:44.735725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bbe50 (9): Bad file descriptor 00:22:05.097 [2024-11-04 17:22:44.735778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:05.097 [2024-11-04 17:22:44.735826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:05.097 [2024-11-04 17:22:44.735842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:05.097 [2024-11-04 17:22:44.735857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:05.097 [2024-11-04 17:22:44.735872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:05.097 6934.00 IOPS, 27.09 MiB/s [2024-11-04T17:23:05.901Z] 6983.78 IOPS, 27.28 MiB/s [2024-11-04T17:23:05.901Z] 7033.89 IOPS, 27.48 MiB/s [2024-11-04T17:23:05.901Z] 7093.95 IOPS, 27.71 MiB/s [2024-11-04T17:23:05.901Z] 7146.80 IOPS, 27.92 MiB/s [2024-11-04T17:23:05.901Z] 7202.73 IOPS, 28.14 MiB/s [2024-11-04T17:23:05.901Z] 7243.43 IOPS, 28.29 MiB/s [2024-11-04T17:23:05.901Z] 7290.42 IOPS, 28.48 MiB/s [2024-11-04T17:23:05.901Z] 7327.27 IOPS, 28.62 MiB/s [2024-11-04T17:23:05.901Z] 7366.76 IOPS, 28.78 MiB/s [2024-11-04T17:23:05.901Z] [2024-11-04 17:22:54.807503] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
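Note: the sequence just above is the interesting part of the failover. bdev_nvme disconnects the controller for nqn.2016-06.io.spdk:cnode1, the first reconnect to 10.0.0.3:4421 fails with connect() errno = 111 (connection refused) and that reset attempt is declared failed, the per-second IOPS samples keep being reported while the reset is retried, and the retry about ten seconds later logs "Resetting controller successful". A minimal sketch for watching the same transition from a second shell, assuming the bdevperf RPC socket used by this test; the polling loop is an illustration, not something multipath.sh itself does:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # list attached NVMe-oF controllers/paths once per second until the
    # bdevperf RPC socket goes away
    while sleep 1; do
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers || break
    done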
00:22:05.097 7408.98 IOPS, 28.94 MiB/s [2024-11-04T17:23:05.901Z] 7446.40 IOPS, 29.09 MiB/s [2024-11-04T17:23:05.901Z] 7460.10 IOPS, 29.14 MiB/s [2024-11-04T17:23:05.901Z] 7482.69 IOPS, 29.23 MiB/s [2024-11-04T17:23:05.901Z] 7504.74 IOPS, 29.32 MiB/s [2024-11-04T17:23:05.901Z] 7523.39 IOPS, 29.39 MiB/s [2024-11-04T17:23:05.901Z] 7537.79 IOPS, 29.44 MiB/s [2024-11-04T17:23:05.901Z] 7554.96 IOPS, 29.51 MiB/s [2024-11-04T17:23:05.901Z] 7571.50 IOPS, 29.58 MiB/s [2024-11-04T17:23:05.901Z] 7587.15 IOPS, 29.64 MiB/s [2024-11-04T17:23:05.901Z] Received shutdown signal, test time was about 55.693090 seconds 00:22:05.097 00:22:05.097 Latency(us) 00:22:05.097 [2024-11-04T17:23:05.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.097 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:05.097 Verification LBA range: start 0x0 length 0x4000 00:22:05.097 Nvme0n1 : 55.69 7597.42 29.68 0.00 0.00 16823.03 467.32 7046430.72 00:22:05.097 [2024-11-04T17:23:05.901Z] =================================================================================================================== 00:22:05.097 [2024-11-04T17:23:05.901Z] Total : 7597.42 29.68 0.00 0.00 16823.03 467.32 7046430.72 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.097 rmmod nvme_tcp 00:22:05.097 rmmod nvme_fabrics 00:22:05.097 rmmod nvme_keyring 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:05.097 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80719 ']' 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80719 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80719 ']' 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80719 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80719 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80719' 00:22:05.098 killing process with pid 80719 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80719 00:22:05.098 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80719 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:05.357 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.357 17:23:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:05.357 00:22:05.357 real 1m1.719s 00:22:05.357 user 2m51.826s 00:22:05.357 sys 0m18.041s 00:22:05.357 ************************************ 00:22:05.357 END TEST nvmf_host_multipath 00:22:05.357 ************************************ 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:05.357 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.617 ************************************ 00:22:05.617 START TEST nvmf_timeout 00:22:05.617 ************************************ 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:05.617 * Looking for test storage... 00:22:05.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:05.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.617 --rc genhtml_branch_coverage=1 00:22:05.617 --rc genhtml_function_coverage=1 00:22:05.617 --rc genhtml_legend=1 00:22:05.617 --rc geninfo_all_blocks=1 00:22:05.617 --rc geninfo_unexecuted_blocks=1 00:22:05.617 00:22:05.617 ' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:05.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.617 --rc genhtml_branch_coverage=1 00:22:05.617 --rc genhtml_function_coverage=1 00:22:05.617 --rc genhtml_legend=1 00:22:05.617 --rc geninfo_all_blocks=1 00:22:05.617 --rc geninfo_unexecuted_blocks=1 00:22:05.617 00:22:05.617 ' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:05.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.617 --rc genhtml_branch_coverage=1 00:22:05.617 --rc genhtml_function_coverage=1 00:22:05.617 --rc genhtml_legend=1 00:22:05.617 --rc geninfo_all_blocks=1 00:22:05.617 --rc geninfo_unexecuted_blocks=1 00:22:05.617 00:22:05.617 ' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:05.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.617 --rc genhtml_branch_coverage=1 00:22:05.617 --rc genhtml_function_coverage=1 00:22:05.617 --rc genhtml_legend=1 00:22:05.617 --rc geninfo_all_blocks=1 00:22:05.617 --rc geninfo_unexecuted_blocks=1 00:22:05.617 00:22:05.617 ' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.617 
17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.617 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.618 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.618 17:23:06 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.618 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:05.877 Cannot find device "nvmf_init_br" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:05.877 Cannot find device "nvmf_init_br2" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:05.877 Cannot find device "nvmf_tgt_br" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:05.877 Cannot find device "nvmf_tgt_br2" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:05.877 Cannot find device "nvmf_init_br" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:05.877 Cannot find device "nvmf_init_br2" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:05.877 Cannot find device "nvmf_tgt_br" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:05.877 Cannot find device "nvmf_tgt_br2" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:05.877 Cannot find device "nvmf_br" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:05.877 Cannot find device "nvmf_init_if" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:05.877 Cannot find device "nvmf_init_if2" 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:05.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:05.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:05.877 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
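Note: nvmftestinit rebuilds the test network from scratch here; the "Cannot find device" messages above are just the tolerant teardown of a previous run, after which a fresh namespace and four veth pairs are created. A condensed sketch with the same names (the matching "set netns" for nvmf_tgt_if2 follows on the next entry):

    ip netns add nvmf_tgt_ns_spdk
    # initiator-side pairs stay in the default namespace; the *_br ends will join the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # target-side pairs; the *_if ends are moved into the target namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk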
00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:05.878 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
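Note: addressing, bridging and firewalling, condensed from the entries above: 10.0.0.1 and 10.0.0.2 land on the initiator-side interfaces, 10.0.0.3 and 10.0.0.4 on the target-side interfaces inside nvmf_tgt_ns_spdk, every bridge-facing veth end is enslaved to nvmf_br, and the ACCEPT rules carry an SPDK_NVMF comment so that nvmftestfini can later strip them with iptables-save | grep -v SPDK_NVMF | iptables-restore. The pings that follow simply confirm all four addresses answer. A sketch with the same values:

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    # comment-tagged rule: easy to remove later without touching unrelated rules
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'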
00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:06.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:06.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:22:06.137 00:22:06.137 --- 10.0.0.3 ping statistics --- 00:22:06.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.137 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:06.137 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:06.137 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:22:06.137 00:22:06.137 --- 10.0.0.4 ping statistics --- 00:22:06.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.137 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:06.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:06.137 00:22:06.137 --- 10.0.0.1 ping statistics --- 00:22:06.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.137 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:06.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:06.137 00:22:06.137 --- 10.0.0.2 ping statistics --- 00:22:06.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.137 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81929 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81929 00:22:06.137 17:23:06 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81929 ']' 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:06.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.137 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.138 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:06.138 17:23:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:06.138 [2024-11-04 17:23:06.888163] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:22:06.138 [2024-11-04 17:23:06.888266] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.397 [2024-11-04 17:23:07.041323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:06.397 [2024-11-04 17:23:07.092857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.397 [2024-11-04 17:23:07.092918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.397 [2024-11-04 17:23:07.092932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.397 [2024-11-04 17:23:07.092943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.397 [2024-11-04 17:23:07.092953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
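At this point nvmfappstart has launched the target inside the namespace (the command line is visible above) and waitforlisten blocks until PID 81929 answers on /var/tmp/spdk.sock. A rough equivalent of that launch-and-wait, with a simple polling loop standing in for the real waitforlisten helper (an assumption; the helper in autotest_common.sh does more bookkeeping):

# Launch nvmf_tgt in the target namespace, exactly as in the log
ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Assumed stand-in for waitforlisten: poll the RPC socket until it responds
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up"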
00:22:06.397 [2024-11-04 17:23:07.094185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.397 [2024-11-04 17:23:07.094199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.397 [2024-11-04 17:23:07.150675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:06.656 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:06.656 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:22:06.656 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.656 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.656 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:06.656 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.656 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:06.656 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:06.915 [2024-11-04 17:23:07.544364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.915 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:07.174 Malloc0 00:22:07.174 17:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:07.432 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:07.691 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:07.950 [2024-11-04 17:23:08.628170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:07.950 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81975 00:22:07.950 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:07.950 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81975 /var/tmp/bdevperf.sock 00:22:07.951 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81975 ']' 00:22:07.951 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.951 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:07.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.951 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
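The RPC sequence above provisions the target end to end: a TCP transport, a RAM-backed Malloc0 bdev (64 MiB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.3:4420 inside the namespace. Collected in one place, with the flags exactly as issued in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192                  # TCP transport for the target
"$rpc" bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512 B blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420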
00:22:07.951 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:07.951 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:07.951 [2024-11-04 17:23:08.695968] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:22:07.951 [2024-11-04 17:23:08.696072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81975 ] 00:22:08.210 [2024-11-04 17:23:08.842611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.210 [2024-11-04 17:23:08.888072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.210 [2024-11-04 17:23:08.940620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:08.210 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:08.210 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:22:08.210 17:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:08.469 17:23:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:08.728 NVMe0n1 00:22:08.987 17:23:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81987 00:22:08.987 17:23:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:08.987 17:23:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:08.987 Running I/O for 10 seconds... 
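On the host side the test points a bdevperf instance (started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f) at the target: bdev_nvme_set_options -r -1 is applied, NVMe0 is attached with a 5-second ctrlr-loss timeout and 2-second reconnect delay, and perform_tests kicks off the 10-second verify run. The same host-side sequence, condensed (paths and flags as in the log):

spdk=/home/vagrant/spdk_repo/spdk

"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
rpc_pid=$!   # the test then removes the 10.0.0.3:4420 listener while this I/O is in flight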
00:22:09.923 17:23:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:10.186 7700.00 IOPS, 30.08 MiB/s [2024-11-04T17:23:10.990Z] [2024-11-04 17:23:10.775868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.775933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.775961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.775970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.775978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.775986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.776010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.776018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.776026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.776034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.776041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.186 [2024-11-04 17:23:10.776049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.187 [2024-11-04 17:23:10.776057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.187 [2024-11-04 17:23:10.776066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.187 [2024-11-04 17:23:10.776074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.187 [2024-11-04 17:23:10.776082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.187 [2024-11-04 17:23:10.776089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.187 [2024-11-04 17:23:10.776097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.187 [2024-11-04 17:23:10.776105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 00:22:10.187 [2024-11-04 17:23:10.776113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b30 is same with the state(6) to be set 
00:22:10.188 [2024-11-04 17:23:10.776972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 
17:23:10.777417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.188 [2024-11-04 17:23:10.777668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.188 [2024-11-04 17:23:10.777679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.777985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.777993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67240 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.189 [2024-11-04 17:23:10.778271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.189 [2024-11-04 17:23:10.778430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-11-04 17:23:10.778439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778479] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.190 [2024-11-04 17:23:10.778670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-11-04 17:23:10.778679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.190 [... nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion NOTICE pairs repeat for every queued I/O: READ lba 67488-67736 and WRITE lba 67752-67864 on sqid:1, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:10.191 [2024-11-04 17:23:10.779647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18da280 is same with the state(6) to be set
00:22:10.191 [2024-11-04 17:23:10.779659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:10.191 [2024-11-04 17:23:10.779667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:10.191 [2024-11-04 17:23:10.779679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67744 len:8 PRP1 0x0 PRP2 0x0
00:22:10.191 [2024-11-04 17:23:10.779689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.191 [2024-11-04 17:23:10.779965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:10.191 [2024-11-04 17:23:10.780050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186ce50 (9): Bad file descriptor
00:22:10.191 [2024-11-04 17:23:10.780148] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.191 [2024-11-04 17:23:10.780169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186ce50 with addr=10.0.0.3, port=4420
00:22:10.191 [2024-11-04 17:23:10.780179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ce50 is same with the state(6) to be set
00:22:10.191 [2024-11-04 17:23:10.780197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186ce50 (9): Bad file descriptor
00:22:10.191 [2024-11-04 17:23:10.780227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:10.191 [2024-11-04 17:23:10.780240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:10.191 [2024-11-04 17:23:10.780250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:10.191 [2024-11-04 17:23:10.780261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:10.191 [2024-11-04 17:23:10.780272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:10.191 17:23:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:22:12.064 4178.00 IOPS, 16.32 MiB/s [2024-11-04T17:23:12.868Z] 2785.33 IOPS, 10.88 MiB/s [2024-11-04T17:23:12.868Z] [2024-11-04 17:23:12.780491] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:12.065 [2024-11-04 17:23:12.780568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186ce50 with addr=10.0.0.3, port=4420
00:22:12.065 [2024-11-04 17:23:12.780586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ce50 is same with the state(6) to be set
00:22:12.065 [2024-11-04 17:23:12.780611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186ce50 (9): Bad file descriptor
00:22:12.065 [2024-11-04 17:23:12.780630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:12.065 [2024-11-04 17:23:12.780640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:12.065 [2024-11-04 17:23:12.780651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:12.065 [2024-11-04 17:23:12.780663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:12.065 [2024-11-04 17:23:12.780675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:12.065 17:23:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:22:12.065 17:23:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:12.065 17:23:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:12.323 17:23:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:22:12.323 17:23:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:22:12.323 17:23:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:12.323 17:23:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:12.582 17:23:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:22:12.582 17:23:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:22:14.087 2089.00 IOPS, 8.16 MiB/s [2024-11-04T17:23:14.891Z] 1671.20 IOPS, 6.53 MiB/s [2024-11-04T17:23:14.891Z] [2024-11-04 17:23:14.780966] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.087 [2024-11-04 17:23:14.781031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186ce50 with addr=10.0.0.3, port=4420
00:22:14.087 [2024-11-04 17:23:14.781047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ce50 is same with the state(6) to be set
00:22:14.087 [2024-11-04 17:23:14.781072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186ce50 (9): Bad file descriptor
00:22:14.087 [2024-11-04 17:23:14.781090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:14.087 [2024-11-04 17:23:14.781099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:14.087 [2024-11-04 17:23:14.781110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:14.087 [2024-11-04 17:23:14.781121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:14.087 [2024-11-04 17:23:14.781132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:15.997 1392.67 IOPS, 5.44 MiB/s [2024-11-04T17:23:16.801Z] 1193.71 IOPS, 4.66 MiB/s [2024-11-04T17:23:16.801Z] [2024-11-04 17:23:16.781263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:15.997 [2024-11-04 17:23:16.781317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:15.997 [2024-11-04 17:23:16.781344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:15.997 [2024-11-04 17:23:16.781353] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:22:15.997 [2024-11-04 17:23:16.781365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:17.194 1044.50 IOPS, 4.08 MiB/s
00:22:17.194 Latency(us)
00:22:17.194 [2024-11-04T17:23:17.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.194 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:17.194 Verification LBA range: start 0x0 length 0x4000
00:22:17.194 NVMe0n1 : 8.11 1030.52 4.03 15.79 0.00 122124.15 3410.85 7046430.72
00:22:17.194 [2024-11-04T17:23:17.998Z] ===================================================================================================================
00:22:17.194 [2024-11-04T17:23:17.998Z] Total : 1030.52 4.03 15.79 0.00 122124.15 3410.85 7046430.72
00:22:17.194 {
00:22:17.194 "results": [
00:22:17.194 {
00:22:17.194 "job": "NVMe0n1",
00:22:17.194 "core_mask": "0x4",
00:22:17.194 "workload": "verify",
00:22:17.194 "status": "finished",
00:22:17.194 "verify_range": {
00:22:17.194 "start": 0,
00:22:17.194 "length": 16384
00:22:17.194 },
00:22:17.194 "queue_depth": 128,
00:22:17.194 "io_size": 4096,
00:22:17.194 "runtime": 8.108552,
00:22:17.194 "iops": 1030.5169159672405,
00:22:17.194 "mibps": 4.025456702997033,
00:22:17.194 "io_failed": 128,
00:22:17.194 "io_timeout": 0,
00:22:17.194 "avg_latency_us": 122124.15185289958,
00:22:17.194 "min_latency_us": 3410.850909090909,
00:22:17.194 "max_latency_us": 7046430.72
00:22:17.194 }
00:22:17.194 ],
00:22:17.194 "core_count": 1
00:22:17.194 }
00:22:17.762 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:22:17.762 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:17.762 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:18.021 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:22:18.021 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:22:18.021 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:18.021 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81987
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81975
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81975 ']'
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81975
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81975
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:22:18.280 killing process with pid 81975
17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81975'
Received shutdown signal, test time was about 9.283991 seconds
00:22:18.280
00:22:18.280 Latency(us)
00:22:18.280 [2024-11-04T17:23:19.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:18.280 [2024-11-04T17:23:19.084Z] ===================================================================================================================
00:22:18.280 [2024-11-04T17:23:19.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81975
00:22:18.280 17:23:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81975
00:22:18.539 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:18.799 [2024-11-04 17:23:19.344734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82104
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82104 /var/tmp/bdevperf.sock
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82104 ']'
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:18.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:18.799 17:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:18.799 [2024-11-04 17:23:19.419905] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization...
00:22:18.799 [2024-11-04 17:23:19.419995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82104 ]
00:22:18.799 [2024-11-04 17:23:19.567964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:19.063 [2024-11-04 17:23:19.624684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:19.063 [2024-11-04 17:23:19.680315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:22:19.630 17:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:22:19.630 17:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0
00:22:19.630 17:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:22:19.889 17:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:22:20.457 NVMe0n1
00:22:20.457 17:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82133
00:22:20.457 17:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:20.457 17:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:22:20.457 Running I/O for 10 seconds...
00:22:21.392 17:23:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:21.653 7445.00 IOPS, 29.08 MiB/s [2024-11-04T17:23:22.457Z] [... nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion NOTICE pairs repeat for every queued I/O: WRITE lba 68184-68768 and READ lba 67752-68040 on sqid:1, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:21.656 [2024-11-04 17:23:22.249558] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.656 [2024-11-04 17:23:22.249765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.656 [2024-11-04 17:23:22.249774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.249785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.657 [2024-11-04 17:23:22.249794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.249805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.657 [2024-11-04 17:23:22.249814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.249825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.657 [2024-11-04 17:23:22.249834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.249846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.657 [2024-11-04 17:23:22.249855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.249866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.657 [2024-11-04 17:23:22.249875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.249886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.657 [2024-11-04 17:23:22.249895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.249906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa140 is same with the state(6) to be set 00:22:21.657 [2024-11-04 17:23:22.249918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.657 [2024-11-04 17:23:22.249926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.657 [2024-11-04 17:23:22.249934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68176 len:8 PRP1 0x0 PRP2 0x0 00:22:21.657 [2024-11-04 17:23:22.249943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.250119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.657 [2024-11-04 17:23:22.250149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.250161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.657 [2024-11-04 17:23:22.250176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.250186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.657 [2024-11-04 17:23:22.250196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.250219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:21.657 [2024-11-04 17:23:22.250234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.657 [2024-11-04 17:23:22.250243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ce50 is same with the state(6) to be set 00:22:21.657 [2024-11-04 17:23:22.250482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:21.657 [2024-11-04 17:23:22.250515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ce50 (9): Bad file descriptor 00:22:21.657 [2024-11-04 17:23:22.250610] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.657 [2024-11-04 17:23:22.250637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153ce50 with addr=10.0.0.3, port=4420 00:22:21.657 [2024-11-04 17:23:22.250648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ce50 is same with the state(6) to be set 00:22:21.657 [2024-11-04 17:23:22.250683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ce50 (9): Bad file descriptor 00:22:21.657 [2024-11-04 17:23:22.250705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:21.657 [2024-11-04 17:23:22.250715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:21.657 [2024-11-04 17:23:22.250725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:21.657 [2024-11-04 17:23:22.250736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:22:21.657 [2024-11-04 17:23:22.250747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:22:21.657 17:23:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 
00:22:22.599 4234.50 IOPS, 16.54 MiB/s [2024-11-04T17:23:23.403Z] [2024-11-04 17:23:23.250903] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:22.599 [2024-11-04 17:23:23.250985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153ce50 with addr=10.0.0.3, port=4420 
00:22:22.599 [2024-11-04 17:23:23.251009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ce50 is same with the state(6) to be set 
00:22:22.599 [2024-11-04 17:23:23.251033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ce50 (9): Bad file descriptor 
00:22:22.599 [2024-11-04 17:23:23.251051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:22:22.599 [2024-11-04 17:23:23.251061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:22:22.599 [2024-11-04 17:23:23.251071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:22:22.599 [2024-11-04 17:23:23.251082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:22:22.599 [2024-11-04 17:23:23.251093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 
00:22:22.599 17:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:22:22.858 [2024-11-04 17:23:23.490882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 
00:22:22.858 17:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82133 
00:22:23.685 2823.00 IOPS, 11.03 MiB/s [2024-11-04T17:23:24.489Z] [2024-11-04 17:23:24.268355] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:25.555 2117.25 IOPS, 8.27 MiB/s [2024-11-04T17:23:27.294Z] 3397.40 IOPS, 13.27 MiB/s [2024-11-04T17:23:28.229Z] 4489.83 IOPS, 17.54 MiB/s [2024-11-04T17:23:29.166Z] 5286.14 IOPS, 20.65 MiB/s [2024-11-04T17:23:30.546Z] 5865.38 IOPS, 22.91 MiB/s [2024-11-04T17:23:31.484Z] 6314.11 IOPS, 24.66 MiB/s [2024-11-04T17:23:31.484Z] 6673.10 IOPS, 26.07 MiB/s 
00:22:30.680 Latency(us) 
00:22:30.680 [2024-11-04T17:23:31.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:30.680 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:22:30.680 Verification LBA range: start 0x0 length 0x4000 
00:22:30.680 NVMe0n1 : 10.01 6678.02 26.09 0.00 0.00 19126.79 1675.64 3019898.88 
00:22:30.680 [2024-11-04T17:23:31.484Z] =================================================================================================================== 
00:22:30.680 [2024-11-04T17:23:31.484Z] Total : 6678.02 26.09 0.00 0.00 19126.79 1675.64 3019898.88 
00:22:30.680 { 
00:22:30.680 "results": [ 
00:22:30.680 { 
00:22:30.680 "job": "NVMe0n1", 
00:22:30.680 "core_mask": "0x4", 
00:22:30.680 "workload": "verify", 
00:22:30.680 "status": "finished", 
00:22:30.680 "verify_range": { 
00:22:30.680 "start": 0, 
00:22:30.680 "length": 16384 
00:22:30.680 }, 
00:22:30.680 "queue_depth": 128, 
00:22:30.680 "io_size": 4096, 
00:22:30.680 "runtime": 10.009398, 
00:22:30.680 "iops": 6678.023993051331, 
00:22:30.680 "mibps": 26.08603122285676, 
00:22:30.680 "io_failed": 0, 
00:22:30.680 "io_timeout": 0, 
00:22:30.680 "avg_latency_us": 19126.786102250455, 
00:22:30.680 "min_latency_us": 1675.6363636363637, 
00:22:30.680 "max_latency_us": 3019898.88 
00:22:30.680 } 
00:22:30.680 ], 
00:22:30.680 "core_count": 1 
00:22:30.680 } 
00:22:30.680 17:23:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82238 
00:22:30.680 17:23:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:22:30.680 17:23:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 
00:22:30.680 Running I/O for 10 seconds... 
00:22:31.618 17:23:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:31.880 7333.00 IOPS, 28.64 MiB/s [2024-11-04T17:23:32.684Z] [2024-11-04 17:23:32.424435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.880 [2024-11-04 17:23:32.424520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.880 [2024-11-04 17:23:32.424559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.880 [2024-11-04 17:23:32.424578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.880 [2024-11-04 17:23:32.424597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ce50 is same with the state(6) to be set 00:22:31.880 [2024-11-04 17:23:32.424867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.880 [2024-11-04 17:23:32.424885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.424914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.424935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.424956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.424977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.424988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.424997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 
17:23:32.425194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.880 [2024-11-04 17:23:32.425524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.880 [2024-11-04 17:23:32.425535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:101 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68232 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.425990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.425999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 
17:23:32.426039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.881 [2024-11-04 17:23:32.426386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.881 [2024-11-04 17:23:32.426397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:31.882 [2024-11-04 17:23:32.426717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426921] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.426981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.426993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427136] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.882 [2024-11-04 17:23:32.427248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.882 [2024-11-04 17:23:32.427259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67800 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.883 [2024-11-04 17:23:32.427563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:31.883 [2024-11-04 17:23:32.427588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ab360 is same with the state(6) to be set 00:22:31.883 [2024-11-04 17:23:32.427610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.883 [2024-11-04 17:23:32.427618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.883 [2024-11-04 17:23:32.427626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68768 len:8 PRP1 0x0 PRP2 0x0 00:22:31.883 [2024-11-04 17:23:32.427635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.883 [2024-11-04 17:23:32.427899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:31.883 [2024-11-04 17:23:32.427927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ce50 (9): Bad file descriptor 00:22:31.883 [2024-11-04 17:23:32.428021] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.883 [2024-11-04 17:23:32.428043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153ce50 with addr=10.0.0.3, port=4420 00:22:31.883 [2024-11-04 17:23:32.428055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ce50 is same with the state(6) to be set 00:22:31.883 [2024-11-04 17:23:32.428072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ce50 (9): Bad file descriptor 00:22:31.883 [2024-11-04 17:23:32.428088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:31.883 [2024-11-04 17:23:32.428097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:31.883 [2024-11-04 17:23:32.428107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:31.883 [2024-11-04 17:23:32.428118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:31.883 [2024-11-04 17:23:32.428128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:31.883 17:23:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:32.821 4234.50 IOPS, 16.54 MiB/s [2024-11-04T17:23:33.625Z] [2024-11-04 17:23:33.428252] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.821 [2024-11-04 17:23:33.428335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153ce50 with addr=10.0.0.3, port=4420 00:22:32.821 [2024-11-04 17:23:33.428351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ce50 is same with the state(6) to be set 00:22:32.821 [2024-11-04 17:23:33.428374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ce50 (9): Bad file descriptor 00:22:32.821 [2024-11-04 17:23:33.428392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:32.821 [2024-11-04 17:23:33.428401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:32.821 [2024-11-04 17:23:33.428410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:32.821 [2024-11-04 17:23:33.428421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:32.821 [2024-11-04 17:23:33.428431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:33.759 2823.00 IOPS, 11.03 MiB/s [2024-11-04T17:23:34.563Z] [2024-11-04 17:23:34.428658] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.759 [2024-11-04 17:23:34.428766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153ce50 with addr=10.0.0.3, port=4420 00:22:33.759 [2024-11-04 17:23:34.428788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ce50 is same with the state(6) to be set 00:22:33.759 [2024-11-04 17:23:34.428824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ce50 (9): Bad file descriptor 00:22:33.759 [2024-11-04 17:23:34.428849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:33.759 [2024-11-04 17:23:34.428861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:33.759 [2024-11-04 17:23:34.428874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:33.759 [2024-11-04 17:23:34.428894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:33.759 [2024-11-04 17:23:34.428908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:34.696 2117.25 IOPS, 8.27 MiB/s [2024-11-04T17:23:35.500Z] [2024-11-04 17:23:35.432409] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.696 [2024-11-04 17:23:35.432523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153ce50 with addr=10.0.0.3, port=4420 00:22:34.696 [2024-11-04 17:23:35.432545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153ce50 is same with the state(6) to be set 00:22:34.696 [2024-11-04 17:23:35.432801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ce50 (9): Bad file descriptor 00:22:34.696 [2024-11-04 17:23:35.433029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:34.696 [2024-11-04 17:23:35.433054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:34.696 [2024-11-04 17:23:35.433069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:34.696 [2024-11-04 17:23:35.433084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:34.696 [2024-11-04 17:23:35.433099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:34.696 17:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:34.955 [2024-11-04 17:23:35.713295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:34.955 17:23:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82238 00:22:35.784 1693.80 IOPS, 6.62 MiB/s [2024-11-04T17:23:36.588Z] [2024-11-04 17:23:36.457710] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:22:37.658 2826.00 IOPS, 11.04 MiB/s [2024-11-04T17:23:39.399Z] 3891.86 IOPS, 15.20 MiB/s [2024-11-04T17:23:40.335Z] 4693.25 IOPS, 18.33 MiB/s [2024-11-04T17:23:41.292Z] 5327.56 IOPS, 20.81 MiB/s [2024-11-04T17:23:41.292Z] 5824.40 IOPS, 22.75 MiB/s 00:22:40.488 Latency(us) 00:22:40.488 [2024-11-04T17:23:41.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.488 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:40.488 Verification LBA range: start 0x0 length 0x4000 00:22:40.488 NVMe0n1 : 10.01 5832.57 22.78 4209.26 0.00 12723.01 703.77 3019898.88 00:22:40.488 [2024-11-04T17:23:41.292Z] =================================================================================================================== 00:22:40.488 [2024-11-04T17:23:41.292Z] Total : 5832.57 22.78 4209.26 0.00 12723.01 0.00 3019898.88 00:22:40.488 { 00:22:40.488 "results": [ 00:22:40.488 { 00:22:40.488 "job": "NVMe0n1", 00:22:40.488 "core_mask": "0x4", 00:22:40.489 "workload": "verify", 00:22:40.489 "status": "finished", 00:22:40.489 "verify_range": { 00:22:40.489 "start": 0, 00:22:40.489 "length": 16384 00:22:40.489 }, 00:22:40.489 "queue_depth": 128, 00:22:40.489 "io_size": 4096, 00:22:40.489 "runtime": 10.007932, 00:22:40.489 "iops": 5832.573602618403, 00:22:40.489 "mibps": 22.783490635228137, 00:22:40.489 "io_failed": 42126, 00:22:40.489 "io_timeout": 0, 00:22:40.489 "avg_latency_us": 12723.013064176763, 00:22:40.489 "min_latency_us": 703.7672727272727, 00:22:40.489 "max_latency_us": 3019898.88 00:22:40.489 } 00:22:40.489 ], 00:22:40.489 "core_count": 1 00:22:40.489 } 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82104 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82104 ']' 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82104 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82104 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:40.747 killing process with pid 82104 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82104' 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82104 00:22:40.747 Received shutdown signal, test time was about 10.000000 seconds 00:22:40.747 00:22:40.747 Latency(us) 00:22:40.747 [2024-11-04T17:23:41.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.747 [2024-11-04T17:23:41.551Z] =================================================================================================================== 00:22:40.747 [2024-11-04T17:23:41.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82104 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82352 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82352 /var/tmp/bdevperf.sock 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82352 ']' 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.747 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:40.748 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.748 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:41.008 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:41.008 [2024-11-04 17:23:41.593699] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:22:41.008 [2024-11-04 17:23:41.593811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82352 ] 00:22:41.008 [2024-11-04 17:23:41.735101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.008 [2024-11-04 17:23:41.788755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.266 [2024-11-04 17:23:41.848176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:41.266 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:41.266 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:22:41.266 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82355 00:22:41.266 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:41.266 17:23:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82352 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:41.525 17:23:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:41.784 NVMe0n1 00:22:41.784 17:23:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82402 00:22:41.784 17:23:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:41.784 17:23:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.042 Running I/O for 10 seconds... 
00:22:42.978 17:23:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:43.240 15748.00 IOPS, 61.52 MiB/s [2024-11-04T17:23:44.044Z] [2024-11-04 17:23:43.784495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.240 [2024-11-04 17:23:43.784562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-11-04 17:23:43.784597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:43.241 the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-04 17:23:43.784623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.241 the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-11-04 17:23:43.784654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with id:0 cdw10:00000000 cdw11:00000000 00:22:43.241 the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.241 [2024-11-04 17:23:43.784671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.241 [2024-11-04 17:23:43.784687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.241 [2024-11-04 17:23:43.784695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.241 [2024-11-04 17:23:43.784711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.241 [2024-11-04 17:23:43.784727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93e50 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 
17:23:43.784824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.784999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.785007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same 
with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.785014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.785022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.241 [2024-11-04 17:23:43.785030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785179] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the 
state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de6aa0 is same with the state(6) to be set 00:22:43.242 [2024-11-04 17:23:43.785666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.242 [2024-11-04 17:23:43.785694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.242 [2024-11-04 17:23:43.785726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.242 [2024-11-04 17:23:43.785744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.242 [2024-11-04 17:23:43.785763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.242 [2024-11-04 17:23:43.785778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.242 [2024-11-04 17:23:43.785794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.242 [2024-11-04 17:23:43.785808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.242 [2024-11-04 17:23:43.785823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.242 [2024-11-04 17:23:43.785838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.785854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.785869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.785885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.785900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.785918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 
17:23:43.785932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.785947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.785961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.785976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.785990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.243 [2024-11-04 17:23:43.786329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.243 [2024-11-04 17:23:43.786345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for every remaining queued request on sqid:1 -- cid:21 through cid:126, then cid:1 and cid:0, at varying LBAs -- between 17:23:43.786 and 17:23:43.790 while tqpair 0x1b01140 is torn down; identical entries elided ...]
00:22:43.246 [2024-11-04 17:23:43.789990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b01140 is same with the state(6) to be set 00:22:43.246 [2024-11-04 17:23:43.790008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.246 [2024-11-04 17:23:43.790020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.246 [2024-11-04 17:23:43.790031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43848 len:8 PRP1 0x0 PRP2 0x0 00:22:43.246 [2024-11-04 17:23:43.790044]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.246 [2024-11-04 17:23:43.790556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:43.246 [2024-11-04 17:23:43.790619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93e50 (9): Bad file descriptor 00:22:43.246 [2024-11-04 17:23:43.790798] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.246 [2024-11-04 17:23:43.790830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a93e50 with addr=10.0.0.3, port=4420 00:22:43.246 [2024-11-04 17:23:43.790848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93e50 is same with the state(6) to be set 00:22:43.246 [2024-11-04 17:23:43.790876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93e50 (9): Bad file descriptor 00:22:43.246 [2024-11-04 17:23:43.790902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:43.246 [2024-11-04 17:23:43.790918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:43.246 [2024-11-04 17:23:43.790934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:43.246 [2024-11-04 17:23:43.790951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:43.246 [2024-11-04 17:23:43.790967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:43.246 17:23:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82402 00:22:45.121 9018.00 IOPS, 35.23 MiB/s [2024-11-04T17:23:45.925Z] 6012.00 IOPS, 23.48 MiB/s [2024-11-04T17:23:45.925Z] [2024-11-04 17:23:45.791185] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.121 [2024-11-04 17:23:45.791268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a93e50 with addr=10.0.0.3, port=4420 00:22:45.121 [2024-11-04 17:23:45.791293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93e50 is same with the state(6) to be set 00:22:45.121 [2024-11-04 17:23:45.791331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93e50 (9): Bad file descriptor 00:22:45.121 [2024-11-04 17:23:45.791435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:45.121 [2024-11-04 17:23:45.791457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:45.121 [2024-11-04 17:23:45.791476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:45.121 [2024-11-04 17:23:45.791495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
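The connect() failures above (errno = 111, connection refused) recur roughly two seconds apart -- 17:23:43.79, 17:23:45.79, then 17:23:47.79 below -- which lines up with the "reconnect delay bdev controller NVMe0" probes dumped from trace.txt at the end of the run. A quick way to pull those attempt timestamps out of a saved copy of this console output (a helper sketch only; build.log is a placeholder name, not a file the test creates):

grep -o '\[2024-11-04 [0-9:.]*\] uring.c: 664:uring_sock_create: \*ERROR\*: connect() failed' build.log \
  | awk '{ gsub(/[][]/, "", $2); print $2 }'   # build.log = wherever this console output was saved
# in this window it would print 17:23:43.790798, 17:23:45.791185, 17:23:47.791710 -- one attempt per ~2 s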
00:22:45.121 [2024-11-04 17:23:45.791515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:47.010 4509.00 IOPS, 17.61 MiB/s [2024-11-04T17:23:47.814Z] 3607.20 IOPS, 14.09 MiB/s [2024-11-04T17:23:47.814Z] [2024-11-04 17:23:47.791710] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.010 [2024-11-04 17:23:47.791786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a93e50 with addr=10.0.0.3, port=4420 00:22:47.010 [2024-11-04 17:23:47.791803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93e50 is same with the state(6) to be set 00:22:47.010 [2024-11-04 17:23:47.791828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93e50 (9): Bad file descriptor 00:22:47.010 [2024-11-04 17:23:47.791848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:47.010 [2024-11-04 17:23:47.791858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:47.010 [2024-11-04 17:23:47.791868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:47.010 [2024-11-04 17:23:47.791879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:47.010 [2024-11-04 17:23:47.791891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:48.896 3006.00 IOPS, 11.74 MiB/s [2024-11-04T17:23:49.958Z] 2576.57 IOPS, 10.06 MiB/s [2024-11-04T17:23:49.958Z] [2024-11-04 17:23:49.791972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:49.154 [2024-11-04 17:23:49.792024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:49.154 [2024-11-04 17:23:49.792035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:49.154 [2024-11-04 17:23:49.792044] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:22:49.154 [2024-11-04 17:23:49.792055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
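The derived columns in the summary and JSON block that follow are consistent with the raw fields -- a rough cross-check worked out here by hand, not output produced by the test: with runtime 8.159604 s, 4096-byte reads and 128 failed I/Os,

awk 'BEGIN {
  iops = 2210.40138712614; io_size = 4096; runtime = 8.159604; io_failed = 128   # values copied from the results JSON below
  printf "MiB/s  = %.2f\n", iops * io_size / (1024 * 1024)   # 8.63, the "mibps" field
  printf "Fail/s = %.2f\n", io_failed / runtime              # 15.69, the Fail/s column
}'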
00:22:50.089 2254.50 IOPS, 8.81 MiB/s 00:22:50.089 Latency(us) 00:22:50.089 [2024-11-04T17:23:50.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.089 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:50.089 NVMe0n1 : 8.16 2210.40 8.63 15.69 0.00 57519.84 7298.33 7046430.72 00:22:50.089 [2024-11-04T17:23:50.893Z] =================================================================================================================== 00:22:50.089 [2024-11-04T17:23:50.893Z] Total : 2210.40 8.63 15.69 0.00 57519.84 7298.33 7046430.72 00:22:50.089 { 00:22:50.089 "results": [ 00:22:50.089 { 00:22:50.089 "job": "NVMe0n1", 00:22:50.089 "core_mask": "0x4", 00:22:50.089 "workload": "randread", 00:22:50.089 "status": "finished", 00:22:50.089 "queue_depth": 128, 00:22:50.089 "io_size": 4096, 00:22:50.089 "runtime": 8.159604, 00:22:50.089 "iops": 2210.40138712614, 00:22:50.089 "mibps": 8.634380418461484, 00:22:50.089 "io_failed": 128, 00:22:50.089 "io_timeout": 0, 00:22:50.089 "avg_latency_us": 57519.83911273048, 00:22:50.089 "min_latency_us": 7298.327272727272, 00:22:50.089 "max_latency_us": 7046430.72 00:22:50.089 } 00:22:50.089 ], 00:22:50.089 "core_count": 1 00:22:50.089 } 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:50.089 Attaching 5 probes... 00:22:50.089 1349.720592: reset bdev controller NVMe0 00:22:50.089 1349.881307: reconnect bdev controller NVMe0 00:22:50.089 3350.225755: reconnect delay bdev controller NVMe0 00:22:50.089 3350.247458: reconnect bdev controller NVMe0 00:22:50.089 5350.782332: reconnect delay bdev controller NVMe0 00:22:50.089 5350.803859: reconnect bdev controller NVMe0 00:22:50.089 7351.135426: reconnect delay bdev controller NVMe0 00:22:50.089 7351.168612: reconnect bdev controller NVMe0 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82355 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82352 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82352 ']' 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82352 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82352 00:22:50.089 killing process with pid 82352 00:22:50.089 Received shutdown signal, test time was about 8.227295 seconds 00:22:50.089 00:22:50.089 Latency(us) 00:22:50.089 [2024-11-04T17:23:50.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.089 [2024-11-04T17:23:50.893Z] =================================================================================================================== 00:22:50.089 [2024-11-04T17:23:50.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.089 17:23:50 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82352' 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82352 00:22:50.089 17:23:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82352 00:22:50.348 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.607 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:50.607 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:50.607 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.607 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:22:50.607 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.607 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:22:50.607 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.607 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.607 rmmod nvme_tcp 00:22:50.607 rmmod nvme_fabrics 00:22:50.867 rmmod nvme_keyring 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81929 ']' 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81929 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81929 ']' 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81929 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81929 00:22:50.867 killing process with pid 81929 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81929' 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81929 00:22:50.867 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81929 00:22:51.125 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.125 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.126 17:23:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.126 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.385 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:22:51.385 00:22:51.385 real 0m45.751s 00:22:51.385 user 2m13.873s 00:22:51.385 sys 0m5.706s 00:22:51.385 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.385 17:23:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:51.385 ************************************ 00:22:51.385 END TEST nvmf_timeout 00:22:51.385 ************************************ 00:22:51.385 17:23:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:51.385 17:23:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:51.385 00:22:51.385 real 5m8.855s 00:22:51.385 user 13m25.812s 00:22:51.385 sys 1m10.665s 00:22:51.385 17:23:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.385 17:23:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:22:51.385 ************************************ 00:22:51.385 END TEST nvmf_host 00:22:51.385 ************************************ 00:22:51.385 17:23:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:22:51.385 17:23:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:22:51.385 00:22:51.385 real 12m48.443s 00:22:51.385 user 30m46.323s 00:22:51.385 sys 3m12.743s 00:22:51.385 17:23:52 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.385 17:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:51.385 ************************************ 00:22:51.385 END TEST nvmf_tcp 00:22:51.385 ************************************ 00:22:51.385 17:23:52 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:22:51.385 17:23:52 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:51.385 17:23:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:51.385 17:23:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:51.385 17:23:52 -- common/autotest_common.sh@10 -- # set +x 00:22:51.385 ************************************ 00:22:51.385 START TEST nvmf_dif 00:22:51.385 ************************************ 00:22:51.385 17:23:52 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:51.385 * Looking for test storage... 00:22:51.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:51.385 17:23:52 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:51.385 17:23:52 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:22:51.385 17:23:52 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:51.645 17:23:52 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:22:51.645 17:23:52 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.645 17:23:52 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:51.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.645 --rc genhtml_branch_coverage=1 00:22:51.645 --rc genhtml_function_coverage=1 00:22:51.645 --rc genhtml_legend=1 00:22:51.645 --rc geninfo_all_blocks=1 00:22:51.645 --rc geninfo_unexecuted_blocks=1 00:22:51.645 00:22:51.645 ' 00:22:51.645 17:23:52 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:51.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.645 --rc genhtml_branch_coverage=1 00:22:51.645 --rc genhtml_function_coverage=1 00:22:51.645 --rc genhtml_legend=1 00:22:51.645 --rc geninfo_all_blocks=1 00:22:51.645 --rc geninfo_unexecuted_blocks=1 00:22:51.645 00:22:51.645 ' 00:22:51.645 17:23:52 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:51.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.645 --rc genhtml_branch_coverage=1 00:22:51.645 --rc genhtml_function_coverage=1 00:22:51.645 --rc genhtml_legend=1 00:22:51.645 --rc geninfo_all_blocks=1 00:22:51.645 --rc geninfo_unexecuted_blocks=1 00:22:51.645 00:22:51.645 ' 00:22:51.645 17:23:52 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:51.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.645 --rc genhtml_branch_coverage=1 00:22:51.645 --rc genhtml_function_coverage=1 00:22:51.645 --rc genhtml_legend=1 00:22:51.645 --rc geninfo_all_blocks=1 00:22:51.645 --rc geninfo_unexecuted_blocks=1 00:22:51.645 00:22:51.645 ' 00:22:51.645 17:23:52 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.645 17:23:52 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.645 17:23:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.645 17:23:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.645 17:23:52 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.645 17:23:52 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.645 17:23:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:51.645 17:23:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.645 17:23:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.646 17:23:52 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.646 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.646 17:23:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:51.646 17:23:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:51.646 17:23:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:51.646 17:23:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:51.646 17:23:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.646 17:23:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:51.646 17:23:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:51.646 Cannot find device 
"nvmf_init_br" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:51.646 Cannot find device "nvmf_init_br2" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:51.646 Cannot find device "nvmf_tgt_br" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@164 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.646 Cannot find device "nvmf_tgt_br2" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@165 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:51.646 Cannot find device "nvmf_init_br" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@166 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:51.646 Cannot find device "nvmf_init_br2" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@167 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:51.646 Cannot find device "nvmf_tgt_br" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@168 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:51.646 Cannot find device "nvmf_tgt_br2" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@169 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:51.646 Cannot find device "nvmf_br" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@170 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:51.646 Cannot find device "nvmf_init_if" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@171 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:51.646 Cannot find device "nvmf_init_if2" 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@172 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@173 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@174 -- # true 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:51.646 17:23:52 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:51.916 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:51.916 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:22:51.916 00:22:51.916 --- 10.0.0.3 ping statistics --- 00:22:51.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.916 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:51.916 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:51.916 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:22:51.916 00:22:51.916 --- 10.0.0.4 ping statistics --- 00:22:51.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.916 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:51.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:22:51.916 00:22:51.916 --- 10.0.0.1 ping statistics --- 00:22:51.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.916 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:51.916 17:23:52 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:52.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:52.187 00:22:52.187 --- 10.0.0.2 ping statistics --- 00:22:52.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.187 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:52.187 17:23:52 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.187 17:23:52 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:22:52.187 17:23:52 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:52.187 17:23:52 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:52.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:52.446 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:52.446 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:52.446 17:23:53 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.446 17:23:53 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.446 17:23:53 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.446 17:23:53 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.446 17:23:53 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.446 17:23:53 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.446 17:23:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:52.446 17:23:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:52.446 17:23:53 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:52.446 17:23:53 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.446 17:23:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:52.446 17:23:53 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82897 00:22:52.447 17:23:53 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:52.447 17:23:53 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82897 00:22:52.447 17:23:53 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 82897 ']' 00:22:52.447 17:23:53 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.447 17:23:53 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:52.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.447 17:23:53 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
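For reference, the veth/bridge topology that nvmf_veth_init assembled and ping-verified above can be rebuilt by hand with roughly the commands below. This is a reduced sketch covering only the first initiator/target pair; the interface names, addresses and iptables rule are the ones visible in this trace, and the ordering follows the ip invocations above.

# create the target namespace and one veth pair per side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
# address the endpoints (initiator 10.0.0.1, target 10.0.0.3)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bring everything up and bridge the two peer ends together
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP traffic in on the initiator interface, then sanity-check the path
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3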
00:22:52.447 17:23:53 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:52.447 17:23:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:52.447 [2024-11-04 17:23:53.189627] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:22:52.447 [2024-11-04 17:23:53.189708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.706 [2024-11-04 17:23:53.345927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.706 [2024-11-04 17:23:53.398074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.706 [2024-11-04 17:23:53.398143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.706 [2024-11-04 17:23:53.398158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.706 [2024-11-04 17:23:53.398168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.706 [2024-11-04 17:23:53.398176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.706 [2024-11-04 17:23:53.398620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.706 [2024-11-04 17:23:53.454723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:22:52.965 17:23:53 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:52.965 17:23:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.965 17:23:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:52.965 17:23:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:52.965 [2024-11-04 17:23:53.574395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.965 17:23:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:52.965 17:23:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:52.965 ************************************ 00:22:52.965 START TEST fio_dif_1_default 00:22:52.965 ************************************ 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:52.965 17:23:53 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:52.965 bdev_null0 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:52.965 [2024-11-04 17:23:53.618555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:52.965 { 00:22:52.965 "params": { 00:22:52.965 "name": "Nvme$subsystem", 00:22:52.965 "trtype": "$TEST_TRANSPORT", 00:22:52.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:52.965 "adrfam": "ipv4", 00:22:52.965 "trsvcid": "$NVMF_PORT", 00:22:52.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:52.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:52.965 "hdgst": ${hdgst:-false}, 
00:22:52.965 "ddgst": ${ddgst:-false} 00:22:52.965 }, 00:22:52.965 "method": "bdev_nvme_attach_controller" 00:22:52.965 } 00:22:52.965 EOF 00:22:52.965 )") 00:22:52.965 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:52.966 "params": { 00:22:52.966 "name": "Nvme0", 00:22:52.966 "trtype": "tcp", 00:22:52.966 "traddr": "10.0.0.3", 00:22:52.966 "adrfam": "ipv4", 00:22:52.966 "trsvcid": "4420", 00:22:52.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:52.966 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:52.966 "hdgst": false, 00:22:52.966 "ddgst": false 00:22:52.966 }, 00:22:52.966 "method": "bdev_nvme_attach_controller" 00:22:52.966 }' 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:52.966 17:23:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:53.225 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:53.225 fio-3.35 00:22:53.225 Starting 1 thread 00:23:05.434 00:23:05.434 filename0: (groupid=0, jobs=1): err= 0: pid=82960: Mon Nov 4 17:24:04 2024 00:23:05.434 read: IOPS=9442, BW=36.9MiB/s (38.7MB/s)(369MiB/10001msec) 00:23:05.434 slat (usec): min=5, max=123, avg= 7.92, stdev= 3.58 00:23:05.434 clat (usec): min=308, max=2760, avg=400.20, stdev=62.11 00:23:05.434 lat (usec): min=314, max=2785, avg=408.12, stdev=63.10 00:23:05.434 clat percentiles (usec): 00:23:05.434 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 355], 00:23:05.434 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 400], 00:23:05.434 | 70.00th=[ 420], 80.00th=[ 445], 90.00th=[ 482], 95.00th=[ 515], 00:23:05.434 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[ 693], 99.95th=[ 725], 00:23:05.434 | 99.99th=[ 1778] 00:23:05.434 bw ( KiB/s): min=31104, max=41440, per=99.73%, avg=37668.26, stdev=3222.13, samples=19 00:23:05.434 iops : min= 7776, max=10360, avg=9417.05, stdev=805.53, samples=19 00:23:05.434 lat (usec) : 500=93.44%, 750=6.53%, 1000=0.02% 00:23:05.434 lat (msec) : 2=0.01%, 4=0.01% 00:23:05.434 cpu : usr=85.43%, sys=12.69%, ctx=52, majf=0, minf=9 00:23:05.434 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:05.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.434 issued rwts: total=94432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.434 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:05.434 00:23:05.434 Run status group 0 (all jobs): 
00:23:05.434 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=369MiB (387MB), run=10001-10001msec 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 00:23:05.434 real 0m11.051s 00:23:05.434 user 0m9.211s 00:23:05.434 sys 0m1.555s 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:05.434 ************************************ 00:23:05.434 END TEST fio_dif_1_default 00:23:05.434 ************************************ 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 17:24:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:05.434 17:24:04 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:05.434 17:24:04 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 ************************************ 00:23:05.434 START TEST fio_dif_1_multi_subsystems 00:23:05.434 ************************************ 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 bdev_null0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 [2024-11-04 17:24:04.728132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 bdev_null1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:05.434 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:05.434 { 00:23:05.435 "params": { 00:23:05.435 "name": "Nvme$subsystem", 00:23:05.435 "trtype": "$TEST_TRANSPORT", 00:23:05.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.435 "adrfam": "ipv4", 00:23:05.435 "trsvcid": "$NVMF_PORT", 00:23:05.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.435 "hdgst": ${hdgst:-false}, 00:23:05.435 "ddgst": ${ddgst:-false} 00:23:05.435 }, 00:23:05.435 "method": "bdev_nvme_attach_controller" 00:23:05.435 } 00:23:05.435 EOF 00:23:05.435 )") 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 
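As in the first test, the fio_bdev helper here preloads SPDK's fio bdev plugin and hands fio a JSON bdev config plus the generated job file over two file descriptors. With ordinary files in place of /dev/fd, the invocation sketched from this trace looks like the following; bdev.json and job.fio are hypothetical stand-ins for the inline descriptors.

# bdev.json holds the bdev_nvme_attach_controller parameters printed below;
# job.fio holds the filename0/filename1 jobs produced by gen_fio_conf
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio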
00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:05.435 { 00:23:05.435 "params": { 00:23:05.435 "name": "Nvme$subsystem", 00:23:05.435 "trtype": "$TEST_TRANSPORT", 00:23:05.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.435 "adrfam": "ipv4", 00:23:05.435 "trsvcid": "$NVMF_PORT", 00:23:05.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.435 "hdgst": ${hdgst:-false}, 00:23:05.435 "ddgst": ${ddgst:-false} 00:23:05.435 }, 00:23:05.435 "method": "bdev_nvme_attach_controller" 00:23:05.435 } 00:23:05.435 EOF 00:23:05.435 )") 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:05.435 "params": { 00:23:05.435 "name": "Nvme0", 00:23:05.435 "trtype": "tcp", 00:23:05.435 "traddr": "10.0.0.3", 00:23:05.435 "adrfam": "ipv4", 00:23:05.435 "trsvcid": "4420", 00:23:05.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:05.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:05.435 "hdgst": false, 00:23:05.435 "ddgst": false 00:23:05.435 }, 00:23:05.435 "method": "bdev_nvme_attach_controller" 00:23:05.435 },{ 00:23:05.435 "params": { 00:23:05.435 "name": "Nvme1", 00:23:05.435 "trtype": "tcp", 00:23:05.435 "traddr": "10.0.0.3", 00:23:05.435 "adrfam": "ipv4", 00:23:05.435 "trsvcid": "4420", 00:23:05.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.435 "hdgst": false, 00:23:05.435 "ddgst": false 00:23:05.435 }, 00:23:05.435 "method": "bdev_nvme_attach_controller" 00:23:05.435 }' 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:05.435 17:24:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:05.435 17:24:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.435 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:05.435 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:05.435 fio-3.35 00:23:05.435 Starting 2 threads 00:23:15.417 00:23:15.417 filename0: (groupid=0, jobs=1): err= 0: pid=83122: Mon Nov 4 17:24:15 2024 00:23:15.417 read: IOPS=5132, BW=20.0MiB/s (21.0MB/s)(201MiB/10001msec) 00:23:15.417 slat (nsec): min=5900, max=79823, avg=13822.96, stdev=6298.74 00:23:15.417 clat (usec): min=405, max=4521, avg=741.79, stdev=86.58 00:23:15.417 lat (usec): min=412, max=4545, avg=755.61, stdev=88.30 00:23:15.417 clat percentiles (usec): 00:23:15.417 | 1.00th=[ 603], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 676], 00:23:15.417 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 742], 00:23:15.417 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 848], 95.00th=[ 889], 00:23:15.417 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1156], 99.95th=[ 1205], 00:23:15.417 | 99.99th=[ 1287] 00:23:15.417 bw ( KiB/s): min=18912, max=21344, per=50.01%, avg=20527.16, stdev=735.49, samples=19 00:23:15.417 iops : min= 4728, max= 5336, avg=5131.79, stdev=183.87, samples=19 00:23:15.417 lat (usec) : 500=0.04%, 750=63.33%, 1000=35.52% 00:23:15.417 lat (msec) : 2=1.11%, 10=0.01% 00:23:15.417 cpu : usr=90.11%, sys=8.39%, ctx=12, majf=0, minf=0 00:23:15.417 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:15.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.417 issued rwts: total=51328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:15.417 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:15.417 filename1: (groupid=0, jobs=1): err= 0: pid=83123: Mon Nov 4 17:24:15 2024 00:23:15.417 read: IOPS=5129, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:23:15.417 slat (usec): min=5, max=361, avg=13.82, stdev= 6.50 00:23:15.417 clat (usec): min=369, max=3857, avg=741.79, stdev=84.60 00:23:15.417 lat (usec): min=375, max=3883, avg=755.61, stdev=85.79 00:23:15.417 clat percentiles (usec): 00:23:15.417 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 685], 00:23:15.417 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 742], 00:23:15.417 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 840], 95.00th=[ 889], 00:23:15.417 | 99.00th=[ 1020], 99.50th=[ 1074], 99.90th=[ 1172], 99.95th=[ 1254], 00:23:15.417 | 99.99th=[ 2573] 00:23:15.417 bw ( KiB/s): min=18912, max=21344, per=49.97%, avg=20512.00, stdev=734.68, samples=19 00:23:15.417 iops : min= 4728, max= 5336, avg=5128.00, stdev=183.67, samples=19 00:23:15.417 lat (usec) : 500=0.02%, 750=64.35%, 1000=34.43% 00:23:15.417 lat (msec) : 2=1.18%, 4=0.02% 00:23:15.417 cpu : usr=90.08%, sys=8.08%, ctx=60, majf=0, minf=0 00:23:15.417 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:15.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:15.417 issued rwts: total=51300,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:15.418 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:15.418 00:23:15.418 Run status group 0 (all jobs): 00:23:15.418 READ: bw=40.1MiB/s (42.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=401MiB (420MB), run=10001-10001msec 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 ************************************ 00:23:15.418 END TEST fio_dif_1_multi_subsystems 00:23:15.418 ************************************ 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.418 00:23:15.418 real 0m11.143s 00:23:15.418 user 0m18.789s 00:23:15.418 sys 0m1.940s 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 17:24:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:15.418 17:24:15 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:15.418 
17:24:15 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 ************************************ 00:23:15.418 START TEST fio_dif_rand_params 00:23:15.418 ************************************ 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 bdev_null0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:15.418 [2024-11-04 17:24:15.927411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@106 -- # fio /dev/fd/62 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:15.418 { 00:23:15.418 "params": { 00:23:15.418 "name": "Nvme$subsystem", 00:23:15.418 "trtype": "$TEST_TRANSPORT", 00:23:15.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:15.418 "adrfam": "ipv4", 00:23:15.418 "trsvcid": "$NVMF_PORT", 00:23:15.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:15.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:15.418 "hdgst": ${hdgst:-false}, 00:23:15.418 "ddgst": ${ddgst:-false} 00:23:15.418 }, 00:23:15.418 "method": "bdev_nvme_attach_controller" 00:23:15.418 } 00:23:15.418 EOF 00:23:15.418 )") 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
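gen_fio_conf's output is not echoed in this trace, but a job file consistent with the parameters set for this test above (bs=128k, numjobs=3, iodepth=3, runtime=5) and with the randread banner printed below would look roughly like the sketch here; the section name and the Nvme0n1 filename are assumptions, not the script's literal output.

# hypothetical job file matching the parameters visible in this trace
cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF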
00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:15.418 "params": { 00:23:15.418 "name": "Nvme0", 00:23:15.418 "trtype": "tcp", 00:23:15.418 "traddr": "10.0.0.3", 00:23:15.418 "adrfam": "ipv4", 00:23:15.418 "trsvcid": "4420", 00:23:15.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:15.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:15.418 "hdgst": false, 00:23:15.418 "ddgst": false 00:23:15.418 }, 00:23:15.418 "method": "bdev_nvme_attach_controller" 00:23:15.418 }' 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:15.418 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:15.419 17:24:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:15.419 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:15.419 ... 
00:23:15.419 fio-3.35 00:23:15.419 Starting 3 threads 00:23:21.991 00:23:21.991 filename0: (groupid=0, jobs=1): err= 0: pid=83274: Mon Nov 4 17:24:21 2024 00:23:21.991 read: IOPS=278, BW=34.9MiB/s (36.6MB/s)(174MiB/5001msec) 00:23:21.991 slat (nsec): min=6057, max=54413, avg=11011.14, stdev=6407.13 00:23:21.991 clat (usec): min=4033, max=14452, avg=10726.71, stdev=637.69 00:23:21.991 lat (usec): min=4041, max=14463, avg=10737.72, stdev=637.91 00:23:21.991 clat percentiles (usec): 00:23:21.991 | 1.00th=[ 9634], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10290], 00:23:21.991 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:23:21.991 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11600], 00:23:21.991 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14484], 99.95th=[14484], 00:23:21.991 | 99.99th=[14484] 00:23:21.991 bw ( KiB/s): min=34560, max=38400, per=33.25%, avg=35584.00, stdev=1214.31, samples=9 00:23:21.991 iops : min= 270, max= 300, avg=278.00, stdev= 9.49, samples=9 00:23:21.991 lat (msec) : 10=6.88%, 20=93.12% 00:23:21.991 cpu : usr=90.84%, sys=8.36%, ctx=106, majf=0, minf=0 00:23:21.991 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:21.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.991 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.991 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:21.991 filename0: (groupid=0, jobs=1): err= 0: pid=83275: Mon Nov 4 17:24:21 2024 00:23:21.991 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5005msec) 00:23:21.991 slat (nsec): min=3988, max=57117, avg=9961.38, stdev=5184.31 00:23:21.991 clat (usec): min=6079, max=17077, avg=10738.43, stdev=638.47 00:23:21.991 lat (usec): min=6086, max=17088, avg=10748.39, stdev=638.60 00:23:21.991 clat percentiles (usec): 00:23:21.991 | 1.00th=[ 9634], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10290], 00:23:21.991 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:23:21.991 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11469], 95.00th=[11731], 00:23:21.991 | 99.00th=[12125], 99.50th=[12125], 99.90th=[17171], 99.95th=[17171], 00:23:21.991 | 99.99th=[17171] 00:23:21.991 bw ( KiB/s): min=34560, max=37632, per=33.29%, avg=35635.20, stdev=1036.72, samples=10 00:23:21.991 iops : min= 270, max= 294, avg=278.40, stdev= 8.10, samples=10 00:23:21.991 lat (msec) : 10=6.95%, 20=93.05% 00:23:21.991 cpu : usr=92.19%, sys=7.17%, ctx=40, majf=0, minf=0 00:23:21.991 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:21.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.991 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.991 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:21.991 filename0: (groupid=0, jobs=1): err= 0: pid=83276: Mon Nov 4 17:24:21 2024 00:23:21.991 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5005msec) 00:23:21.991 slat (nsec): min=6238, max=48823, avg=10376.42, stdev=5320.23 00:23:21.991 clat (usec): min=7207, max=16778, avg=10736.29, stdev=624.25 00:23:21.991 lat (usec): min=7213, max=16790, avg=10746.67, stdev=624.68 00:23:21.991 clat percentiles (usec): 00:23:21.991 | 1.00th=[ 9634], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10290], 00:23:21.991 | 30.00th=[10421], 40.00th=[10552], 
50.00th=[10683], 60.00th=[10814], 00:23:21.991 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11469], 95.00th=[11731], 00:23:21.991 | 99.00th=[12125], 99.50th=[12256], 99.90th=[16712], 99.95th=[16909], 00:23:21.991 | 99.99th=[16909] 00:23:21.991 bw ( KiB/s): min=33792, max=37707, per=33.30%, avg=35642.70, stdev=1113.42, samples=10 00:23:21.991 iops : min= 264, max= 294, avg=278.40, stdev= 8.58, samples=10 00:23:21.991 lat (msec) : 10=7.60%, 20=92.40% 00:23:21.991 cpu : usr=92.73%, sys=6.71%, ctx=55, majf=0, minf=0 00:23:21.991 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:21.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:21.991 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:21.991 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:21.991 00:23:21.991 Run status group 0 (all jobs): 00:23:21.991 READ: bw=105MiB/s (110MB/s), 34.8MiB/s-34.9MiB/s (36.5MB/s-36.6MB/s), io=523MiB (549MB), run=5001-5005msec 00:23:21.991 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:21.992 
17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 bdev_null0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 [2024-11-04 17:24:21.937426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 bdev_null1 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 bdev_null2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.992 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.992 { 00:23:21.992 "params": { 00:23:21.992 "name": "Nvme$subsystem", 00:23:21.992 "trtype": "$TEST_TRANSPORT", 00:23:21.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.992 "adrfam": "ipv4", 00:23:21.993 "trsvcid": "$NVMF_PORT", 00:23:21.993 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.993 "hdgst": ${hdgst:-false}, 00:23:21.993 "ddgst": ${ddgst:-false} 00:23:21.993 }, 00:23:21.993 "method": "bdev_nvme_attach_controller" 00:23:21.993 } 00:23:21.993 EOF 00:23:21.993 )") 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.993 { 00:23:21.993 "params": { 00:23:21.993 "name": "Nvme$subsystem", 00:23:21.993 "trtype": "$TEST_TRANSPORT", 00:23:21.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.993 "adrfam": "ipv4", 00:23:21.993 "trsvcid": "$NVMF_PORT", 00:23:21.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.993 "hdgst": ${hdgst:-false}, 00:23:21.993 "ddgst": ${ddgst:-false} 00:23:21.993 }, 00:23:21.993 "method": "bdev_nvme_attach_controller" 00:23:21.993 } 00:23:21.993 EOF 00:23:21.993 )") 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.993 { 00:23:21.993 "params": { 00:23:21.993 "name": "Nvme$subsystem", 00:23:21.993 "trtype": "$TEST_TRANSPORT", 00:23:21.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.993 "adrfam": "ipv4", 00:23:21.993 "trsvcid": "$NVMF_PORT", 00:23:21.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.993 "hdgst": ${hdgst:-false}, 00:23:21.993 "ddgst": ${ddgst:-false} 00:23:21.993 }, 00:23:21.993 "method": "bdev_nvme_attach_controller" 00:23:21.993 } 00:23:21.993 EOF 00:23:21.993 )") 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:21.993 "params": { 00:23:21.993 "name": "Nvme0", 00:23:21.993 "trtype": "tcp", 00:23:21.993 "traddr": "10.0.0.3", 00:23:21.993 "adrfam": "ipv4", 00:23:21.993 "trsvcid": "4420", 00:23:21.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:21.993 "hdgst": false, 00:23:21.993 "ddgst": false 00:23:21.993 }, 00:23:21.993 "method": "bdev_nvme_attach_controller" 00:23:21.993 },{ 00:23:21.993 "params": { 00:23:21.993 "name": "Nvme1", 00:23:21.993 "trtype": "tcp", 00:23:21.993 "traddr": "10.0.0.3", 00:23:21.993 "adrfam": "ipv4", 00:23:21.993 "trsvcid": "4420", 00:23:21.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.993 "hdgst": false, 00:23:21.993 "ddgst": false 00:23:21.993 }, 00:23:21.993 "method": "bdev_nvme_attach_controller" 00:23:21.993 },{ 00:23:21.993 "params": { 00:23:21.993 "name": "Nvme2", 00:23:21.993 "trtype": "tcp", 00:23:21.993 "traddr": "10.0.0.3", 00:23:21.993 "adrfam": "ipv4", 00:23:21.993 "trsvcid": "4420", 00:23:21.993 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:21.993 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:21.993 "hdgst": false, 00:23:21.993 "ddgst": false 00:23:21.993 }, 00:23:21.993 "method": "bdev_nvme_attach_controller" 00:23:21.993 }' 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:21.993 17:24:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:21.993 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:21.993 ... 00:23:21.993 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:21.993 ... 00:23:21.993 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:21.993 ... 00:23:21.993 fio-3.35 00:23:21.993 Starting 24 threads 00:23:34.201 00:23:34.201 filename0: (groupid=0, jobs=1): err= 0: pid=83371: Mon Nov 4 17:24:33 2024 00:23:34.201 read: IOPS=231, BW=924KiB/s (946kB/s)(9292KiB/10053msec) 00:23:34.201 slat (usec): min=3, max=4020, avg=15.47, stdev=93.23 00:23:34.201 clat (msec): min=3, max=140, avg=69.06, stdev=23.51 00:23:34.201 lat (msec): min=3, max=140, avg=69.07, stdev=23.51 00:23:34.201 clat percentiles (msec): 00:23:34.201 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 42], 20.00th=[ 50], 00:23:34.201 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:23:34.201 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 100], 95.00th=[ 108], 00:23:34.201 | 99.00th=[ 117], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 134], 00:23:34.201 | 99.99th=[ 142] 00:23:34.201 bw ( KiB/s): min= 654, max= 2048, per=4.16%, avg=922.70, stdev=287.39, samples=20 00:23:34.201 iops : min= 163, max= 512, avg=230.65, stdev=71.87, samples=20 00:23:34.201 lat (msec) : 4=0.60%, 10=1.03%, 20=2.50%, 50=17.22%, 100=69.22% 00:23:34.201 lat (msec) : 250=9.43% 00:23:34.201 cpu : usr=43.94%, sys=2.29%, ctx=1364, majf=0, minf=9 00:23:34.201 IO depths : 1=0.1%, 2=1.2%, 4=4.3%, 8=78.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:34.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.201 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.201 issued rwts: total=2323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.201 filename0: (groupid=0, jobs=1): err= 0: pid=83372: Mon Nov 4 17:24:33 2024 00:23:34.201 read: IOPS=235, BW=942KiB/s (965kB/s)(9424KiB/10005msec) 00:23:34.201 slat (usec): min=4, max=4024, avg=19.50, stdev=143.11 00:23:34.201 clat (msec): min=8, max=132, avg=67.84, stdev=20.85 00:23:34.201 lat (msec): min=8, max=132, avg=67.86, stdev=20.85 00:23:34.201 clat percentiles (msec): 00:23:34.201 | 1.00th=[ 20], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 48], 00:23:34.201 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:23:34.201 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 97], 95.00th=[ 108], 00:23:34.201 | 99.00th=[ 120], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:23:34.201 | 99.99th=[ 133] 00:23:34.201 bw ( KiB/s): min= 664, max= 1280, per=4.17%, avg=923.37, stdev=147.83, samples=19 00:23:34.201 iops : min= 166, max= 320, avg=230.84, stdev=36.96, samples=19 00:23:34.201 lat (msec) : 10=0.55%, 20=0.55%, 50=22.92%, 100=68.00%, 250=7.98% 00:23:34.201 cpu : usr=42.15%, sys=2.76%, ctx=1272, majf=0, minf=9 00:23:34.201 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:34.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.201 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:23:34.201 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.201 filename0: (groupid=0, jobs=1): err= 0: pid=83373: Mon Nov 4 17:24:33 2024 00:23:34.201 read: IOPS=229, BW=918KiB/s (940kB/s)(9188KiB/10013msec) 00:23:34.201 slat (usec): min=4, max=4071, avg=23.35, stdev=185.25 00:23:34.201 clat (msec): min=13, max=137, avg=69.62, stdev=20.23 00:23:34.201 lat (msec): min=13, max=137, avg=69.64, stdev=20.24 00:23:34.201 clat percentiles (msec): 00:23:34.201 | 1.00th=[ 30], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 51], 00:23:34.201 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 73], 00:23:34.201 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:23:34.201 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 130], 99.95th=[ 138], 00:23:34.201 | 99.99th=[ 138] 00:23:34.201 bw ( KiB/s): min= 640, max= 1282, per=4.13%, avg=915.05, stdev=150.02, samples=20 00:23:34.201 iops : min= 160, max= 320, avg=228.70, stdev=37.42, samples=20 00:23:34.201 lat (msec) : 20=0.26%, 50=20.37%, 100=70.31%, 250=9.06% 00:23:34.201 cpu : usr=40.39%, sys=2.24%, ctx=1422, majf=0, minf=9 00:23:34.201 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:34.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 complete : 0=0.0%, 4=88.2%, 8=10.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.202 filename0: (groupid=0, jobs=1): err= 0: pid=83374: Mon Nov 4 17:24:33 2024 00:23:34.202 read: IOPS=236, BW=946KiB/s (968kB/s)(9464KiB/10008msec) 00:23:34.202 slat (usec): min=4, max=8034, avg=24.22, stdev=274.17 00:23:34.202 clat (msec): min=8, max=128, avg=67.54, stdev=20.23 00:23:34.202 lat (msec): min=8, max=128, avg=67.56, stdev=20.24 00:23:34.202 clat percentiles (msec): 00:23:34.202 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 48], 00:23:34.202 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:23:34.202 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:23:34.202 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 129], 00:23:34.202 | 99.99th=[ 129] 00:23:34.202 bw ( KiB/s): min= 664, max= 1253, per=4.20%, avg=929.53, stdev=132.08, samples=19 00:23:34.202 iops : min= 166, max= 313, avg=232.37, stdev=32.99, samples=19 00:23:34.202 lat (msec) : 10=0.13%, 20=0.63%, 50=25.95%, 100=64.96%, 250=8.33% 00:23:34.202 cpu : usr=32.73%, sys=1.79%, ctx=928, majf=0, minf=9 00:23:34.202 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=80.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:34.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.202 filename0: (groupid=0, jobs=1): err= 0: pid=83375: Mon Nov 4 17:24:33 2024 00:23:34.202 read: IOPS=222, BW=891KiB/s (912kB/s)(8912KiB/10003msec) 00:23:34.202 slat (usec): min=4, max=8020, avg=20.23, stdev=225.54 00:23:34.202 clat (msec): min=4, max=157, avg=71.75, stdev=21.14 00:23:34.202 lat (msec): min=4, max=157, avg=71.77, stdev=21.14 00:23:34.202 clat percentiles (msec): 00:23:34.202 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:23:34.202 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 
60.00th=[ 73], 00:23:34.202 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 103], 95.00th=[ 109], 00:23:34.202 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 157], 00:23:34.202 | 99.99th=[ 157] 00:23:34.202 bw ( KiB/s): min= 592, max= 1056, per=3.94%, avg=872.53, stdev=128.16, samples=19 00:23:34.202 iops : min= 148, max= 264, avg=218.11, stdev=32.01, samples=19 00:23:34.202 lat (msec) : 10=1.03%, 20=0.49%, 50=16.34%, 100=70.51%, 250=11.62% 00:23:34.202 cpu : usr=34.54%, sys=1.67%, ctx=1110, majf=0, minf=9 00:23:34.202 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:34.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 complete : 0=0.0%, 4=88.7%, 8=10.2%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 issued rwts: total=2228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.202 filename0: (groupid=0, jobs=1): err= 0: pid=83376: Mon Nov 4 17:24:33 2024 00:23:34.202 read: IOPS=243, BW=973KiB/s (997kB/s)(9744KiB/10010msec) 00:23:34.202 slat (usec): min=3, max=5037, avg=26.17, stdev=218.30 00:23:34.202 clat (msec): min=14, max=123, avg=65.62, stdev=20.35 00:23:34.202 lat (msec): min=15, max=123, avg=65.65, stdev=20.35 00:23:34.202 clat percentiles (msec): 00:23:34.202 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 43], 20.00th=[ 48], 00:23:34.202 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 71], 00:23:34.202 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 107], 00:23:34.202 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 124], 99.95th=[ 124], 00:23:34.202 | 99.99th=[ 124] 00:23:34.202 bw ( KiB/s): min= 664, max= 1576, per=4.38%, avg=970.15, stdev=181.29, samples=20 00:23:34.202 iops : min= 166, max= 394, avg=242.50, stdev=45.32, samples=20 00:23:34.202 lat (msec) : 20=1.15%, 50=24.18%, 100=67.73%, 250=6.94% 00:23:34.202 cpu : usr=43.60%, sys=2.48%, ctx=1596, majf=0, minf=9 00:23:34.202 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:34.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 issued rwts: total=2436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.202 filename0: (groupid=0, jobs=1): err= 0: pid=83377: Mon Nov 4 17:24:33 2024 00:23:34.202 read: IOPS=234, BW=938KiB/s (961kB/s)(9420KiB/10042msec) 00:23:34.202 slat (nsec): min=4737, max=35646, avg=13667.56, stdev=4540.33 00:23:34.202 clat (msec): min=8, max=143, avg=68.11, stdev=22.96 00:23:34.202 lat (msec): min=8, max=143, avg=68.13, stdev=22.96 00:23:34.202 clat percentiles (msec): 00:23:34.202 | 1.00th=[ 13], 5.00th=[ 26], 10.00th=[ 40], 20.00th=[ 48], 00:23:34.202 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:23:34.202 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 108], 00:23:34.202 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:23:34.202 | 99.99th=[ 144] 00:23:34.202 bw ( KiB/s): min= 640, max= 2008, per=4.22%, avg=935.50, stdev=275.54, samples=20 00:23:34.202 iops : min= 160, max= 502, avg=233.85, stdev=68.88, samples=20 00:23:34.202 lat (msec) : 10=0.59%, 20=3.06%, 50=19.87%, 100=67.18%, 250=9.30% 00:23:34.202 cpu : usr=33.97%, sys=1.73%, ctx=1009, majf=0, minf=9 00:23:34.202 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.1%, 16=16.6%, 32=0.0%, >=64=0.0% 00:23:34.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.202 filename0: (groupid=0, jobs=1): err= 0: pid=83378: Mon Nov 4 17:24:33 2024 00:23:34.202 read: IOPS=234, BW=939KiB/s (961kB/s)(9388KiB/10002msec) 00:23:34.202 slat (usec): min=4, max=9023, avg=28.45, stdev=319.40 00:23:34.202 clat (usec): min=1990, max=129204, avg=68012.55, stdev=20196.54 00:23:34.202 lat (usec): min=1998, max=129218, avg=68041.00, stdev=20194.85 00:23:34.202 clat percentiles (msec): 00:23:34.202 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 50], 00:23:34.202 | 30.00th=[ 56], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:23:34.202 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 107], 00:23:34.202 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 130], 00:23:34.202 | 99.99th=[ 130] 00:23:34.202 bw ( KiB/s): min= 664, max= 1136, per=4.14%, avg=917.89, stdev=128.61, samples=19 00:23:34.202 iops : min= 166, max= 284, avg=229.47, stdev=32.15, samples=19 00:23:34.202 lat (msec) : 2=0.09%, 4=0.17%, 10=0.64%, 20=0.64%, 50=21.01% 00:23:34.202 lat (msec) : 100=70.13%, 250=7.33% 00:23:34.202 cpu : usr=39.19%, sys=2.23%, ctx=1160, majf=0, minf=9 00:23:34.202 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:34.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 issued rwts: total=2347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.202 filename1: (groupid=0, jobs=1): err= 0: pid=83379: Mon Nov 4 17:24:33 2024 00:23:34.202 read: IOPS=226, BW=907KiB/s (929kB/s)(9120KiB/10054msec) 00:23:34.202 slat (usec): min=4, max=8031, avg=17.97, stdev=180.07 00:23:34.202 clat (msec): min=2, max=153, avg=70.42, stdev=25.70 00:23:34.202 lat (msec): min=3, max=153, avg=70.44, stdev=25.71 00:23:34.202 clat percentiles (msec): 00:23:34.202 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 52], 00:23:34.202 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:23:34.202 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 103], 95.00th=[ 114], 00:23:34.202 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 153], 00:23:34.202 | 99.99th=[ 153] 00:23:34.202 bw ( KiB/s): min= 512, max= 2253, per=4.08%, avg=904.65, stdev=341.16, samples=20 00:23:34.202 iops : min= 128, max= 563, avg=226.15, stdev=85.24, samples=20 00:23:34.202 lat (msec) : 4=0.61%, 10=2.89%, 20=1.32%, 50=13.64%, 100=70.53% 00:23:34.202 lat (msec) : 250=11.01% 00:23:34.202 cpu : usr=37.10%, sys=2.65%, ctx=1188, majf=0, minf=0 00:23:34.202 IO depths : 1=0.1%, 2=2.0%, 4=7.9%, 8=74.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:34.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 complete : 0=0.0%, 4=89.8%, 8=8.5%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.202 filename1: (groupid=0, jobs=1): err= 0: pid=83380: Mon Nov 4 17:24:33 2024 00:23:34.202 read: IOPS=200, BW=804KiB/s (823kB/s)(8056KiB/10020msec) 00:23:34.202 slat (usec): min=4, max=8024, avg=31.82, stdev=351.81 00:23:34.202 clat (msec): min=21, max=152, avg=79.36, stdev=21.98 
00:23:34.202 lat (msec): min=21, max=152, avg=79.39, stdev=21.98 00:23:34.202 clat percentiles (msec): 00:23:34.202 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 62], 00:23:34.202 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 84], 00:23:34.202 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 117], 00:23:34.202 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 153], 00:23:34.202 | 99.99th=[ 153] 00:23:34.202 bw ( KiB/s): min= 512, max= 1168, per=3.62%, avg=801.90, stdev=164.63, samples=20 00:23:34.202 iops : min= 128, max= 292, avg=200.45, stdev=41.14, samples=20 00:23:34.202 lat (msec) : 50=10.13%, 100=72.79%, 250=17.08% 00:23:34.202 cpu : usr=34.33%, sys=2.18%, ctx=1064, majf=0, minf=9 00:23:34.202 IO depths : 1=0.1%, 2=4.0%, 4=16.1%, 8=65.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:23:34.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 complete : 0=0.0%, 4=91.9%, 8=4.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.202 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.202 filename1: (groupid=0, jobs=1): err= 0: pid=83381: Mon Nov 4 17:24:33 2024 00:23:34.203 read: IOPS=229, BW=919KiB/s (941kB/s)(9224KiB/10034msec) 00:23:34.203 slat (nsec): min=6030, max=35449, avg=13849.20, stdev=4577.54 00:23:34.203 clat (msec): min=12, max=143, avg=69.49, stdev=21.10 00:23:34.203 lat (msec): min=12, max=143, avg=69.51, stdev=21.10 00:23:34.203 clat percentiles (msec): 00:23:34.203 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 49], 00:23:34.203 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:23:34.203 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:23:34.203 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 132], 00:23:34.203 | 99.99th=[ 144] 00:23:34.203 bw ( KiB/s): min= 608, max= 1544, per=4.13%, avg=915.85, stdev=187.28, samples=20 00:23:34.203 iops : min= 152, max= 386, avg=228.95, stdev=46.81, samples=20 00:23:34.203 lat (msec) : 20=0.09%, 50=23.03%, 100=67.35%, 250=9.54% 00:23:34.203 cpu : usr=31.35%, sys=1.88%, ctx=875, majf=0, minf=9 00:23:34.203 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:23:34.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 issued rwts: total=2306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.203 filename1: (groupid=0, jobs=1): err= 0: pid=83382: Mon Nov 4 17:24:33 2024 00:23:34.203 read: IOPS=230, BW=923KiB/s (945kB/s)(9280KiB/10059msec) 00:23:34.203 slat (usec): min=3, max=4633, avg=15.58, stdev=97.60 00:23:34.203 clat (msec): min=4, max=143, avg=69.16, stdev=22.87 00:23:34.203 lat (msec): min=4, max=143, avg=69.18, stdev=22.87 00:23:34.203 clat percentiles (msec): 00:23:34.203 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 44], 20.00th=[ 50], 00:23:34.203 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:23:34.203 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:23:34.203 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 142], 00:23:34.203 | 99.99th=[ 144] 00:23:34.203 bw ( KiB/s): min= 576, max= 1888, per=4.17%, avg=924.00, stdev=256.41, samples=20 00:23:34.203 iops : min= 144, max= 472, avg=231.00, stdev=64.10, samples=20 00:23:34.203 lat (msec) : 10=1.98%, 20=1.47%, 50=16.77%, 100=69.78%, 
250=10.00% 00:23:34.203 cpu : usr=34.55%, sys=1.98%, ctx=1145, majf=0, minf=0 00:23:34.203 IO depths : 1=0.2%, 2=0.8%, 4=2.8%, 8=80.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:34.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.203 filename1: (groupid=0, jobs=1): err= 0: pid=83383: Mon Nov 4 17:24:33 2024 00:23:34.203 read: IOPS=230, BW=922KiB/s (944kB/s)(9256KiB/10044msec) 00:23:34.203 slat (usec): min=6, max=8024, avg=19.45, stdev=214.27 00:23:34.203 clat (msec): min=10, max=132, avg=69.32, stdev=20.84 00:23:34.203 lat (msec): min=10, max=132, avg=69.34, stdev=20.84 00:23:34.203 clat percentiles (msec): 00:23:34.203 | 1.00th=[ 21], 5.00th=[ 32], 10.00th=[ 47], 20.00th=[ 50], 00:23:34.203 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:23:34.203 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 97], 95.00th=[ 108], 00:23:34.203 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:23:34.203 | 99.99th=[ 133] 00:23:34.203 bw ( KiB/s): min= 688, max= 1672, per=4.15%, avg=919.10, stdev=208.17, samples=20 00:23:34.203 iops : min= 172, max= 418, avg=229.75, stdev=52.04, samples=20 00:23:34.203 lat (msec) : 20=0.78%, 50=19.88%, 100=70.87%, 250=8.47% 00:23:34.203 cpu : usr=32.45%, sys=1.73%, ctx=1006, majf=0, minf=9 00:23:34.203 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.8%, 16=16.7%, 32=0.0%, >=64=0.0% 00:23:34.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 issued rwts: total=2314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.203 filename1: (groupid=0, jobs=1): err= 0: pid=83384: Mon Nov 4 17:24:33 2024 00:23:34.203 read: IOPS=238, BW=954KiB/s (977kB/s)(9564KiB/10022msec) 00:23:34.203 slat (usec): min=4, max=8053, avg=28.22, stdev=306.61 00:23:34.203 clat (msec): min=15, max=141, avg=66.91, stdev=20.90 00:23:34.203 lat (msec): min=15, max=141, avg=66.94, stdev=20.91 00:23:34.203 clat percentiles (msec): 00:23:34.203 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 43], 20.00th=[ 48], 00:23:34.203 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:23:34.203 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:23:34.203 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 122], 99.95th=[ 136], 00:23:34.203 | 99.99th=[ 142] 00:23:34.203 bw ( KiB/s): min= 688, max= 1624, per=4.30%, avg=952.25, stdev=195.23, samples=20 00:23:34.203 iops : min= 172, max= 406, avg=238.05, stdev=48.80, samples=20 00:23:34.203 lat (msec) : 20=0.29%, 50=26.27%, 100=65.79%, 250=7.65% 00:23:34.203 cpu : usr=38.88%, sys=2.51%, ctx=1073, majf=0, minf=9 00:23:34.203 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:34.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 issued rwts: total=2391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.203 filename1: (groupid=0, jobs=1): err= 0: pid=83385: Mon Nov 4 17:24:33 2024 00:23:34.203 read: IOPS=230, BW=922KiB/s (944kB/s)(9224KiB/10005msec) 00:23:34.203 slat (usec): min=4, 
max=12037, avg=28.49, stdev=333.70 00:23:34.203 clat (msec): min=13, max=119, avg=69.26, stdev=19.60 00:23:34.203 lat (msec): min=13, max=119, avg=69.29, stdev=19.61 00:23:34.203 clat percentiles (msec): 00:23:34.203 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 51], 00:23:34.203 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 73], 00:23:34.203 | 70.00th=[ 78], 80.00th=[ 82], 90.00th=[ 99], 95.00th=[ 108], 00:23:34.203 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 121], 00:23:34.203 | 99.99th=[ 121] 00:23:34.203 bw ( KiB/s): min= 592, max= 1145, per=4.12%, avg=912.89, stdev=138.09, samples=19 00:23:34.203 iops : min= 148, max= 286, avg=228.21, stdev=34.50, samples=19 00:23:34.203 lat (msec) : 20=0.48%, 50=19.90%, 100=70.73%, 250=8.89% 00:23:34.203 cpu : usr=43.90%, sys=2.66%, ctx=1238, majf=0, minf=9 00:23:34.203 IO depths : 1=0.1%, 2=0.8%, 4=2.9%, 8=80.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:34.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 issued rwts: total=2306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.203 filename1: (groupid=0, jobs=1): err= 0: pid=83386: Mon Nov 4 17:24:33 2024 00:23:34.203 read: IOPS=240, BW=961KiB/s (984kB/s)(9640KiB/10036msec) 00:23:34.203 slat (usec): min=6, max=8041, avg=22.60, stdev=210.21 00:23:34.203 clat (msec): min=15, max=131, avg=66.51, stdev=21.27 00:23:34.203 lat (msec): min=15, max=131, avg=66.54, stdev=21.27 00:23:34.203 clat percentiles (msec): 00:23:34.203 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 43], 20.00th=[ 48], 00:23:34.203 | 30.00th=[ 54], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:23:34.203 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 96], 95.00th=[ 108], 00:23:34.203 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 121], 00:23:34.203 | 99.99th=[ 132] 00:23:34.203 bw ( KiB/s): min= 640, max= 1752, per=4.32%, avg=957.50, stdev=215.05, samples=20 00:23:34.203 iops : min= 160, max= 438, avg=239.35, stdev=53.76, samples=20 00:23:34.203 lat (msec) : 20=1.41%, 50=22.53%, 100=68.30%, 250=7.76% 00:23:34.203 cpu : usr=42.25%, sys=2.52%, ctx=1300, majf=0, minf=9 00:23:34.203 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:34.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 issued rwts: total=2410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.203 filename2: (groupid=0, jobs=1): err= 0: pid=83387: Mon Nov 4 17:24:33 2024 00:23:34.203 read: IOPS=222, BW=889KiB/s (910kB/s)(8928KiB/10042msec) 00:23:34.203 slat (usec): min=4, max=9022, avg=24.53, stdev=306.28 00:23:34.203 clat (msec): min=18, max=153, avg=71.86, stdev=21.14 00:23:34.203 lat (msec): min=18, max=153, avg=71.88, stdev=21.15 00:23:34.203 clat percentiles (msec): 00:23:34.203 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 58], 00:23:34.203 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:23:34.203 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:23:34.203 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 155], 00:23:34.203 | 99.99th=[ 155] 00:23:34.203 bw ( KiB/s): min= 656, max= 1536, per=4.00%, avg=886.30, stdev=178.96, samples=20 00:23:34.203 iops : min= 164, max= 384, 
avg=221.55, stdev=44.74, samples=20 00:23:34.203 lat (msec) : 20=1.25%, 50=15.95%, 100=73.21%, 250=9.59% 00:23:34.203 cpu : usr=33.63%, sys=2.00%, ctx=1004, majf=0, minf=9 00:23:34.203 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=76.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:34.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 complete : 0=0.0%, 4=89.3%, 8=9.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.203 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.203 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.203 filename2: (groupid=0, jobs=1): err= 0: pid=83388: Mon Nov 4 17:24:33 2024 00:23:34.203 read: IOPS=233, BW=935KiB/s (957kB/s)(9372KiB/10027msec) 00:23:34.203 slat (usec): min=4, max=8023, avg=22.44, stdev=251.81 00:23:34.203 clat (msec): min=17, max=143, avg=68.36, stdev=21.05 00:23:34.203 lat (msec): min=17, max=143, avg=68.38, stdev=21.06 00:23:34.203 clat percentiles (msec): 00:23:34.203 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 48], 00:23:34.203 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:23:34.203 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 108], 00:23:34.204 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:23:34.204 | 99.99th=[ 144] 00:23:34.204 bw ( KiB/s): min= 648, max= 1512, per=4.21%, avg=932.75, stdev=178.82, samples=20 00:23:34.204 iops : min= 162, max= 378, avg=233.15, stdev=44.73, samples=20 00:23:34.204 lat (msec) : 20=0.60%, 50=25.31%, 100=65.09%, 250=9.01% 00:23:34.204 cpu : usr=32.61%, sys=1.60%, ctx=966, majf=0, minf=9 00:23:34.204 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:34.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.204 filename2: (groupid=0, jobs=1): err= 0: pid=83389: Mon Nov 4 17:24:33 2024 00:23:34.204 read: IOPS=225, BW=901KiB/s (923kB/s)(9016KiB/10004msec) 00:23:34.204 slat (usec): min=4, max=8028, avg=50.10, stdev=530.57 00:23:34.204 clat (msec): min=4, max=156, avg=70.77, stdev=23.49 00:23:34.204 lat (msec): min=4, max=156, avg=70.82, stdev=23.49 00:23:34.204 clat percentiles (msec): 00:23:34.204 | 1.00th=[ 11], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:23:34.204 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:23:34.204 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 113], 00:23:34.204 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 157], 00:23:34.204 | 99.99th=[ 157] 00:23:34.204 bw ( KiB/s): min= 512, max= 1072, per=3.96%, avg=876.21, stdev=163.85, samples=19 00:23:34.204 iops : min= 128, max= 268, avg=219.05, stdev=40.96, samples=19 00:23:34.204 lat (msec) : 10=0.98%, 20=0.58%, 50=23.11%, 100=62.51%, 250=12.82% 00:23:34.204 cpu : usr=32.29%, sys=1.62%, ctx=955, majf=0, minf=9 00:23:34.204 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:23:34.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 complete : 0=0.0%, 4=88.9%, 8=9.6%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.204 filename2: (groupid=0, jobs=1): err= 0: pid=83390: Mon Nov 4 17:24:33 2024 
00:23:34.204 read: IOPS=236, BW=946KiB/s (968kB/s)(9468KiB/10011msec) 00:23:34.204 slat (usec): min=3, max=8042, avg=44.50, stdev=493.34 00:23:34.204 clat (msec): min=11, max=119, avg=67.48, stdev=20.26 00:23:34.204 lat (msec): min=11, max=119, avg=67.52, stdev=20.27 00:23:34.204 clat percentiles (msec): 00:23:34.204 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:23:34.204 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:23:34.204 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:23:34.204 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:23:34.204 | 99.99th=[ 121] 00:23:34.204 bw ( KiB/s): min= 664, max= 1349, per=4.23%, avg=936.68, stdev=148.08, samples=19 00:23:34.204 iops : min= 166, max= 337, avg=234.16, stdev=36.98, samples=19 00:23:34.204 lat (msec) : 20=0.80%, 50=23.62%, 100=68.44%, 250=7.14% 00:23:34.204 cpu : usr=31.50%, sys=1.72%, ctx=864, majf=0, minf=9 00:23:34.204 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:34.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 issued rwts: total=2367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.204 filename2: (groupid=0, jobs=1): err= 0: pid=83391: Mon Nov 4 17:24:33 2024 00:23:34.204 read: IOPS=235, BW=942KiB/s (965kB/s)(9468KiB/10051msec) 00:23:34.204 slat (usec): min=6, max=4026, avg=20.70, stdev=164.76 00:23:34.204 clat (msec): min=4, max=132, avg=67.74, stdev=23.92 00:23:34.204 lat (msec): min=4, max=132, avg=67.76, stdev=23.92 00:23:34.204 clat percentiles (msec): 00:23:34.204 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 41], 20.00th=[ 48], 00:23:34.204 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:23:34.204 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 101], 95.00th=[ 109], 00:23:34.204 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 132], 99.95th=[ 132], 00:23:34.204 | 99.99th=[ 133] 00:23:34.204 bw ( KiB/s): min= 576, max= 2072, per=4.25%, avg=940.50, stdev=297.89, samples=20 00:23:34.204 iops : min= 144, max= 518, avg=235.10, stdev=74.46, samples=20 00:23:34.204 lat (msec) : 10=1.65%, 20=2.92%, 50=19.10%, 100=66.24%, 250=10.10% 00:23:34.204 cpu : usr=41.38%, sys=2.28%, ctx=1381, majf=0, minf=9 00:23:34.204 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=81.3%, 16=16.5%, 32=0.0%, >=64=0.0% 00:23:34.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 issued rwts: total=2367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.204 filename2: (groupid=0, jobs=1): err= 0: pid=83392: Mon Nov 4 17:24:33 2024 00:23:34.204 read: IOPS=229, BW=918KiB/s (940kB/s)(9192KiB/10017msec) 00:23:34.204 slat (usec): min=5, max=12023, avg=32.28, stdev=357.02 00:23:34.204 clat (msec): min=17, max=125, avg=69.55, stdev=18.78 00:23:34.204 lat (msec): min=17, max=125, avg=69.58, stdev=18.77 00:23:34.204 clat percentiles (msec): 00:23:34.204 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 51], 00:23:34.204 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:23:34.204 | 70.00th=[ 78], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:23:34.204 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 122], 99.95th=[ 122], 00:23:34.204 | 99.99th=[ 126] 00:23:34.204 bw ( KiB/s): 
min= 712, max= 1152, per=4.12%, avg=912.70, stdev=119.58, samples=20 00:23:34.204 iops : min= 178, max= 288, avg=228.15, stdev=29.90, samples=20 00:23:34.204 lat (msec) : 20=0.09%, 50=19.36%, 100=72.50%, 250=8.05% 00:23:34.204 cpu : usr=40.89%, sys=2.09%, ctx=1193, majf=0, minf=9 00:23:34.204 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:34.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 issued rwts: total=2298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.204 filename2: (groupid=0, jobs=1): err= 0: pid=83393: Mon Nov 4 17:24:33 2024 00:23:34.204 read: IOPS=241, BW=964KiB/s (987kB/s)(9652KiB/10011msec) 00:23:34.204 slat (usec): min=3, max=8028, avg=16.85, stdev=163.22 00:23:34.204 clat (msec): min=11, max=123, avg=66.27, stdev=20.34 00:23:34.204 lat (msec): min=11, max=123, avg=66.29, stdev=20.34 00:23:34.204 clat percentiles (msec): 00:23:34.204 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:23:34.204 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:23:34.204 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:23:34.204 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:23:34.204 | 99.99th=[ 125] 00:23:34.204 bw ( KiB/s): min= 664, max= 1424, per=4.34%, avg=961.25, stdev=160.09, samples=20 00:23:34.204 iops : min= 166, max= 356, avg=240.25, stdev=40.01, samples=20 00:23:34.204 lat (msec) : 20=0.58%, 50=28.97%, 100=63.32%, 250=7.13% 00:23:34.204 cpu : usr=31.53%, sys=1.64%, ctx=869, majf=0, minf=9 00:23:34.204 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:34.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.204 issued rwts: total=2413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:34.204 filename2: (groupid=0, jobs=1): err= 0: pid=83394: Mon Nov 4 17:24:33 2024 00:23:34.204 read: IOPS=235, BW=943KiB/s (966kB/s)(9440KiB/10010msec) 00:23:34.204 slat (usec): min=3, max=8025, avg=21.78, stdev=202.02 00:23:34.204 clat (msec): min=10, max=123, avg=67.72, stdev=19.71 00:23:34.204 lat (msec): min=10, max=123, avg=67.75, stdev=19.71 00:23:34.204 clat percentiles (msec): 00:23:34.204 | 1.00th=[ 31], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:23:34.204 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:23:34.204 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:23:34.204 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 124], 99.95th=[ 124], 00:23:34.204 | 99.99th=[ 124] 00:23:34.204 bw ( KiB/s): min= 664, max= 1264, per=4.25%, avg=940.45, stdev=151.02, samples=20 00:23:34.204 iops : min= 166, max= 316, avg=235.05, stdev=37.75, samples=20 00:23:34.204 lat (msec) : 20=0.42%, 50=25.30%, 100=66.48%, 250=7.80% 00:23:34.205 cpu : usr=37.13%, sys=2.31%, ctx=1121, majf=0, minf=9 00:23:34.205 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:34.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.205 complete : 0=0.0%, 4=87.7%, 8=11.4%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.205 issued rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.205 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:23:34.205 00:23:34.205 Run status group 0 (all jobs): 00:23:34.205 READ: bw=21.6MiB/s (22.7MB/s), 804KiB/s-973KiB/s (823kB/s-997kB/s), io=218MiB (228MB), run=10002-10059msec 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 bdev_null0 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 [2024-11-04 17:24:33.358262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 bdev_null1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:34.205 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.206 { 00:23:34.206 "params": { 00:23:34.206 "name": "Nvme$subsystem", 00:23:34.206 "trtype": "$TEST_TRANSPORT", 00:23:34.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.206 "adrfam": "ipv4", 00:23:34.206 "trsvcid": "$NVMF_PORT", 00:23:34.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.206 "hdgst": ${hdgst:-false}, 00:23:34.206 "ddgst": ${ddgst:-false} 00:23:34.206 }, 00:23:34.206 "method": "bdev_nvme_attach_controller" 00:23:34.206 } 00:23:34.206 EOF 00:23:34.206 )") 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
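Note: the rpc_cmd calls traced above amount to building two DIF-protected null bdevs and exporting each one through its own NVMe/TCP subsystem on 10.0.0.3:4420. Pulled out of the harness, the same sequence would look roughly like the sketch below (scripts/rpc.py talking to the already-running target; the TCP transport is assumed to have been created earlier in the run).

    # Sketch only -- condensed from the rpc_cmd calls in the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for sub in 0 1; do
        # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
        $RPC bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
            --serial-number 53313233-$sub --allow-any-host
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
            -t tcp -a 10.0.0.3 -s 4420
    done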
00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.206 { 00:23:34.206 "params": { 00:23:34.206 "name": "Nvme$subsystem", 00:23:34.206 "trtype": "$TEST_TRANSPORT", 00:23:34.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.206 "adrfam": "ipv4", 00:23:34.206 "trsvcid": "$NVMF_PORT", 00:23:34.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.206 "hdgst": ${hdgst:-false}, 00:23:34.206 "ddgst": ${ddgst:-false} 00:23:34.206 }, 00:23:34.206 "method": "bdev_nvme_attach_controller" 00:23:34.206 } 00:23:34.206 EOF 00:23:34.206 )") 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:34.206 "params": { 00:23:34.206 "name": "Nvme0", 00:23:34.206 "trtype": "tcp", 00:23:34.206 "traddr": "10.0.0.3", 00:23:34.206 "adrfam": "ipv4", 00:23:34.206 "trsvcid": "4420", 00:23:34.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:34.206 "hdgst": false, 00:23:34.206 "ddgst": false 00:23:34.206 }, 00:23:34.206 "method": "bdev_nvme_attach_controller" 00:23:34.206 },{ 00:23:34.206 "params": { 00:23:34.206 "name": "Nvme1", 00:23:34.206 "trtype": "tcp", 00:23:34.206 "traddr": "10.0.0.3", 00:23:34.206 "adrfam": "ipv4", 00:23:34.206 "trsvcid": "4420", 00:23:34.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.206 "hdgst": false, 00:23:34.206 "ddgst": false 00:23:34.206 }, 00:23:34.206 "method": "bdev_nvme_attach_controller" 00:23:34.206 }' 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:34.206 17:24:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.206 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:34.206 ... 00:23:34.206 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:34.206 ... 
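Note: the JSON just printed attaches the two subsystems as bdev_nvme controllers Nvme0 and Nvme1, while the job file handed to fio on /dev/fd/61 drives them with the parameters set at dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5). A hand-written approximation of that job file, assuming the default Nvme0n1/Nvme1n1 bdev names produced by the attach calls (the exact options gen_fio_conf emits may differ slightly), would be:

    [global]
    ioengine=spdk_bdev
    thread=1            ; the SPDK fio plugin runs jobs as threads
    rw=randread
    bs=8k,16k,128k      ; read/write/trim block sizes, as echoed in the job lines below
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

With two job sections and numjobs=2, fio reports "Starting 4 threads" below.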
00:23:34.206 fio-3.35 00:23:34.206 Starting 4 threads 00:23:39.480 00:23:39.480 filename0: (groupid=0, jobs=1): err= 0: pid=83535: Mon Nov 4 17:24:39 2024 00:23:39.480 read: IOPS=2491, BW=19.5MiB/s (20.4MB/s)(97.4MiB/5002msec) 00:23:39.480 slat (nsec): min=6627, max=69794, avg=10291.40, stdev=4264.83 00:23:39.480 clat (usec): min=600, max=6908, avg=3181.44, stdev=1043.05 00:23:39.480 lat (usec): min=608, max=6923, avg=3191.74, stdev=1043.36 00:23:39.480 clat percentiles (usec): 00:23:39.480 | 1.00th=[ 1237], 5.00th=[ 1336], 10.00th=[ 1385], 20.00th=[ 1516], 00:23:39.480 | 30.00th=[ 2868], 40.00th=[ 3195], 50.00th=[ 3654], 60.00th=[ 3851], 00:23:39.480 | 70.00th=[ 3916], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4228], 00:23:39.480 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 4883], 99.95th=[ 5014], 00:23:39.480 | 99.99th=[ 5407] 00:23:39.480 bw ( KiB/s): min=18048, max=21792, per=30.65%, avg=20394.67, stdev=1252.91, samples=9 00:23:39.480 iops : min= 2256, max= 2724, avg=2549.33, stdev=156.61, samples=9 00:23:39.480 lat (usec) : 750=0.26%, 1000=0.14% 00:23:39.480 lat (msec) : 2=20.98%, 4=56.34%, 10=22.28% 00:23:39.480 cpu : usr=91.16%, sys=7.78%, ctx=5, majf=0, minf=0 00:23:39.480 IO depths : 1=0.1%, 2=5.0%, 4=61.3%, 8=33.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.480 complete : 0=0.0%, 4=98.1%, 8=1.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.480 issued rwts: total=12464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:39.480 filename0: (groupid=0, jobs=1): err= 0: pid=83536: Mon Nov 4 17:24:39 2024 00:23:39.480 read: IOPS=1935, BW=15.1MiB/s (15.9MB/s)(75.6MiB/5001msec) 00:23:39.480 slat (usec): min=4, max=136, avg=15.34, stdev= 5.38 00:23:39.480 clat (usec): min=1256, max=7186, avg=4073.07, stdev=305.77 00:23:39.480 lat (usec): min=1267, max=7194, avg=4088.41, stdev=306.18 00:23:39.480 clat percentiles (usec): 00:23:39.480 | 1.00th=[ 3294], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:23:39.480 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4113], 00:23:39.480 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:23:39.480 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5538], 99.95th=[ 5735], 00:23:39.480 | 99.99th=[ 7177] 00:23:39.480 bw ( KiB/s): min=14592, max=16400, per=23.20%, avg=15436.22, stdev=545.71, samples=9 00:23:39.480 iops : min= 1824, max= 2050, avg=1929.44, stdev=68.24, samples=9 00:23:39.480 lat (msec) : 2=0.15%, 4=38.58%, 10=61.26% 00:23:39.480 cpu : usr=90.70%, sys=8.06%, ctx=24, majf=0, minf=0 00:23:39.480 IO depths : 1=0.1%, 2=24.6%, 4=50.3%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.480 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.480 issued rwts: total=9681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:39.480 filename1: (groupid=0, jobs=1): err= 0: pid=83537: Mon Nov 4 17:24:39 2024 00:23:39.480 read: IOPS=1921, BW=15.0MiB/s (15.7MB/s)(75.1MiB/5002msec) 00:23:39.480 slat (nsec): min=3697, max=84089, avg=15462.59, stdev=5293.86 00:23:39.480 clat (usec): min=1212, max=6548, avg=4102.00, stdev=316.24 00:23:39.480 lat (usec): min=1227, max=6559, avg=4117.47, stdev=316.52 00:23:39.480 clat percentiles (usec): 00:23:39.480 | 1.00th=[ 3556], 5.00th=[ 3687], 10.00th=[ 3818], 20.00th=[ 
3916], 00:23:39.480 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4113], 00:23:39.480 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4621], 00:23:39.480 | 99.00th=[ 5407], 99.50th=[ 5997], 99.90th=[ 6259], 99.95th=[ 6325], 00:23:39.480 | 99.99th=[ 6521] 00:23:39.480 bw ( KiB/s): min=14592, max=15872, per=23.02%, avg=15317.33, stdev=409.80, samples=9 00:23:39.480 iops : min= 1824, max= 1984, avg=1914.67, stdev=51.22, samples=9 00:23:39.480 lat (msec) : 2=0.01%, 4=38.14%, 10=61.85% 00:23:39.480 cpu : usr=91.16%, sys=7.74%, ctx=39, majf=0, minf=1 00:23:39.480 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.480 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.480 issued rwts: total=9609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.480 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:39.480 filename1: (groupid=0, jobs=1): err= 0: pid=83538: Mon Nov 4 17:24:39 2024 00:23:39.480 read: IOPS=1969, BW=15.4MiB/s (16.1MB/s)(77.0MiB/5002msec) 00:23:39.480 slat (nsec): min=4097, max=71239, avg=14787.24, stdev=5185.32 00:23:39.480 clat (usec): min=1047, max=6748, avg=4005.30, stdev=403.99 00:23:39.480 lat (usec): min=1056, max=6761, avg=4020.08, stdev=404.34 00:23:39.480 clat percentiles (usec): 00:23:39.480 | 1.00th=[ 2278], 5.00th=[ 3523], 10.00th=[ 3720], 20.00th=[ 3884], 00:23:39.481 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4080], 00:23:39.481 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:23:39.481 | 99.00th=[ 4752], 99.50th=[ 4817], 99.90th=[ 5604], 99.95th=[ 5604], 00:23:39.481 | 99.99th=[ 6718] 00:23:39.481 bw ( KiB/s): min=14848, max=16720, per=23.61%, avg=15712.00, stdev=629.31, samples=9 00:23:39.481 iops : min= 1856, max= 2090, avg=1965.56, stdev=77.46, samples=9 00:23:39.481 lat (msec) : 2=0.48%, 4=41.59%, 10=57.94% 00:23:39.481 cpu : usr=89.50%, sys=9.54%, ctx=6, majf=0, minf=0 00:23:39.481 IO depths : 1=0.1%, 2=23.1%, 4=51.2%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.481 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.481 issued rwts: total=9852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.481 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:39.481 00:23:39.481 Run status group 0 (all jobs): 00:23:39.481 READ: bw=65.0MiB/s (68.1MB/s), 15.0MiB/s-19.5MiB/s (15.7MB/s-20.4MB/s), io=325MiB (341MB), run=5001-5002msec 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.481 
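Note: the group summary is just the four jobs added up: 19.5 + 15.1 + 15.0 + 15.4 MiB/s = 65.0 MiB/s aggregate read bandwidth, and 97.4 + 75.6 + 75.1 + 77.0 MiB ≈ 325 MiB read over the roughly 5-second runs, matching the READ line above.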
17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 ************************************ 00:23:39.481 END TEST fio_dif_rand_params 00:23:39.481 ************************************ 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.481 00:23:39.481 real 0m23.627s 00:23:39.481 user 2m3.233s 00:23:39.481 sys 0m8.721s 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 17:24:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:39.481 17:24:39 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:39.481 17:24:39 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 ************************************ 00:23:39.481 START TEST fio_dif_digest 00:23:39.481 ************************************ 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest 
-- target/dif.sh@28 -- # local sub 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 bdev_null0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:39.481 [2024-11-04 17:24:39.606017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:39.481 { 00:23:39.481 "params": { 00:23:39.481 "name": "Nvme$subsystem", 00:23:39.481 "trtype": "$TEST_TRANSPORT", 00:23:39.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.481 "adrfam": "ipv4", 00:23:39.481 "trsvcid": "$NVMF_PORT", 00:23:39.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.481 "hdgst": ${hdgst:-false}, 00:23:39.481 "ddgst": ${ddgst:-false} 00:23:39.481 }, 00:23:39.481 "method": "bdev_nvme_attach_controller" 00:23:39.481 } 00:23:39.481 EOF 00:23:39.481 )") 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
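Note: the libasan/libclang_rt.asan probing repeated here (and in the previous test) decides whether a sanitizer runtime has to be preloaded ahead of the fio plugin. In rough form, what those xtrace lines are doing is something like the following (an approximate reconstruction, not the literal helper body):

    # Approximate reconstruction of the sanitizer check traced above.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # Find the sanitizer runtime the plugin links against, if any.
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$lib" ]] && asan_lib=$lib
    done
    # The sanitizer runtime (empty in this run) must come before the plugin itself.
    export LD_PRELOAD="$asan_lib $plugin"

Both greps come back empty in this run, which is why LD_PRELOAD below ends up as just ' .../build/fio/spdk_bdev' with a leading space.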
00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:23:39.481 17:24:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:39.482 "params": { 00:23:39.482 "name": "Nvme0", 00:23:39.482 "trtype": "tcp", 00:23:39.482 "traddr": "10.0.0.3", 00:23:39.482 "adrfam": "ipv4", 00:23:39.482 "trsvcid": "4420", 00:23:39.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:39.482 "hdgst": true, 00:23:39.482 "ddgst": true 00:23:39.482 }, 00:23:39.482 "method": "bdev_nvme_attach_controller" 00:23:39.482 }' 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:39.482 17:24:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:39.482 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:39.482 ... 
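Note: for the digest test the attach call carries "hdgst": true and "ddgst": true, i.e. NVMe/TCP header and data digests are enabled on the initiator connection, which is the point of fio_dif_digest. The harness preloads the SPDK fio bdev plugin and feeds both the bdev JSON config and the job file over anonymous fds; run by hand it would look roughly like this (bdev.json and digest.fio are placeholder names standing in for /dev/fd/62 and /dev/fd/61):

    # Rough standalone equivalent of the invocation traced above.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json digest.fio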
00:23:39.482 fio-3.35 00:23:39.482 Starting 3 threads 00:23:51.689 00:23:51.689 filename0: (groupid=0, jobs=1): err= 0: pid=83648: Mon Nov 4 17:24:50 2024 00:23:51.689 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(299MiB/10002msec) 00:23:51.689 slat (nsec): min=5810, max=63610, avg=10030.35, stdev=4651.17 00:23:51.689 clat (usec): min=6476, max=14228, avg=12540.54, stdev=555.63 00:23:51.689 lat (usec): min=6482, max=14246, avg=12550.57, stdev=556.23 00:23:51.689 clat percentiles (usec): 00:23:51.689 | 1.00th=[11600], 5.00th=[11863], 10.00th=[11994], 20.00th=[12125], 00:23:51.689 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:23:51.689 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13173], 95.00th=[13435], 00:23:51.689 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14222], 99.95th=[14222], 00:23:51.689 | 99.99th=[14222] 00:23:51.689 bw ( KiB/s): min=29184, max=31488, per=33.38%, avg=30598.74, stdev=689.93, samples=19 00:23:51.689 iops : min= 228, max= 246, avg=239.05, stdev= 5.39, samples=19 00:23:51.689 lat (msec) : 10=0.13%, 20=99.87% 00:23:51.689 cpu : usr=90.86%, sys=8.46%, ctx=14, majf=0, minf=0 00:23:51.689 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.689 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.689 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:51.689 filename0: (groupid=0, jobs=1): err= 0: pid=83649: Mon Nov 4 17:24:50 2024 00:23:51.689 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(299MiB/10005msec) 00:23:51.689 slat (nsec): min=6801, max=56597, avg=9968.32, stdev=4415.82 00:23:51.689 clat (usec): min=8774, max=17273, avg=12543.75, stdev=563.01 00:23:51.689 lat (usec): min=8781, max=17298, avg=12553.72, stdev=563.71 00:23:51.689 clat percentiles (usec): 00:23:51.689 | 1.00th=[11600], 5.00th=[11863], 10.00th=[11994], 20.00th=[12125], 00:23:51.689 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:23:51.689 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13566], 00:23:51.689 | 99.00th=[13960], 99.50th=[14222], 99.90th=[17171], 99.95th=[17171], 00:23:51.689 | 99.99th=[17171] 00:23:51.689 bw ( KiB/s): min=29184, max=31488, per=33.34%, avg=30561.37, stdev=698.58, samples=19 00:23:51.689 iops : min= 228, max= 246, avg=238.74, stdev= 5.51, samples=19 00:23:51.689 lat (msec) : 10=0.25%, 20=99.75% 00:23:51.689 cpu : usr=90.80%, sys=8.58%, ctx=17, majf=0, minf=0 00:23:51.689 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.689 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.689 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:51.689 filename0: (groupid=0, jobs=1): err= 0: pid=83650: Mon Nov 4 17:24:50 2024 00:23:51.689 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(299MiB/10005msec) 00:23:51.689 slat (nsec): min=5222, max=82576, avg=9685.98, stdev=4736.91 00:23:51.689 clat (usec): min=8853, max=15855, avg=12545.46, stdev=544.00 00:23:51.689 lat (usec): min=8861, max=15870, avg=12555.14, stdev=544.43 00:23:51.689 clat percentiles (usec): 00:23:51.689 | 1.00th=[11600], 5.00th=[11863], 10.00th=[11994], 20.00th=[12125], 00:23:51.689 | 30.00th=[12125], 40.00th=[12387], 
50.00th=[12518], 60.00th=[12649], 00:23:51.689 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13435], 00:23:51.689 | 99.00th=[13960], 99.50th=[14091], 99.90th=[15795], 99.95th=[15795], 00:23:51.689 | 99.99th=[15795] 00:23:51.689 bw ( KiB/s): min=28416, max=31488, per=33.34%, avg=30558.32, stdev=832.65, samples=19 00:23:51.689 iops : min= 222, max= 246, avg=238.74, stdev= 6.51, samples=19 00:23:51.689 lat (msec) : 10=0.13%, 20=99.87% 00:23:51.689 cpu : usr=90.72%, sys=8.69%, ctx=12, majf=0, minf=0 00:23:51.689 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.689 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.689 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:51.689 00:23:51.689 Run status group 0 (all jobs): 00:23:51.689 READ: bw=89.5MiB/s (93.9MB/s), 29.8MiB/s-29.8MiB/s (31.3MB/s-31.3MB/s), io=896MiB (939MB), run=10002-10005msec 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:51.689 ************************************ 00:23:51.689 END TEST fio_dif_digest 00:23:51.689 ************************************ 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.689 00:23:51.689 real 0m11.101s 00:23:51.689 user 0m27.954s 00:23:51.689 sys 0m2.851s 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:51.689 17:24:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:51.689 17:24:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:51.689 17:24:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.689 rmmod nvme_tcp 00:23:51.689 rmmod nvme_fabrics 00:23:51.689 rmmod nvme_keyring 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.689 17:24:50 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82897 ']' 00:23:51.689 17:24:50 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82897 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 82897 ']' 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 82897 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82897 00:23:51.689 killing process with pid 82897 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82897' 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@971 -- # kill 82897 00:23:51.689 17:24:50 nvmf_dif -- common/autotest_common.sh@976 -- # wait 82897 00:23:51.689 17:24:51 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:51.689 17:24:51 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:51.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:51.689 Waiting for block devices as requested 00:23:51.689 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:51.690 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:51.690 17:24:51 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.690 17:24:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:51.690 17:24:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.690 17:24:51 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:23:51.690 00:23:51.690 real 0m59.787s 00:23:51.690 user 3m47.241s 00:23:51.690 sys 0m20.023s 00:23:51.690 ************************************ 00:23:51.690 END TEST nvmf_dif 00:23:51.690 ************************************ 00:23:51.690 17:24:51 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:51.690 17:24:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:51.690 17:24:51 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:51.690 17:24:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:51.690 17:24:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:51.690 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:23:51.690 ************************************ 00:23:51.690 START TEST nvmf_abort_qd_sizes 00:23:51.690 ************************************ 00:23:51.690 17:24:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:51.690 * Looking for test storage... 00:23:51.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:51.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.690 --rc genhtml_branch_coverage=1 00:23:51.690 --rc genhtml_function_coverage=1 00:23:51.690 --rc genhtml_legend=1 00:23:51.690 --rc geninfo_all_blocks=1 00:23:51.690 --rc geninfo_unexecuted_blocks=1 00:23:51.690 00:23:51.690 ' 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:51.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.690 --rc genhtml_branch_coverage=1 00:23:51.690 --rc genhtml_function_coverage=1 00:23:51.690 --rc genhtml_legend=1 00:23:51.690 --rc geninfo_all_blocks=1 00:23:51.690 --rc geninfo_unexecuted_blocks=1 00:23:51.690 00:23:51.690 ' 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:51.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.690 --rc genhtml_branch_coverage=1 00:23:51.690 --rc genhtml_function_coverage=1 00:23:51.690 --rc genhtml_legend=1 00:23:51.690 --rc geninfo_all_blocks=1 00:23:51.690 --rc geninfo_unexecuted_blocks=1 00:23:51.690 00:23:51.690 ' 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:51.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.690 --rc genhtml_branch_coverage=1 00:23:51.690 --rc genhtml_function_coverage=1 00:23:51.690 --rc genhtml_legend=1 00:23:51.690 --rc geninfo_all_blocks=1 00:23:51.690 --rc geninfo_unexecuted_blocks=1 00:23:51.690 00:23:51.690 ' 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.690 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.691 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:51.691 Cannot find device "nvmf_init_br" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:51.691 Cannot find device "nvmf_init_br2" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:51.691 Cannot find device "nvmf_tgt_br" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:51.691 Cannot find device "nvmf_tgt_br2" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:51.691 Cannot find device "nvmf_init_br" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:51.691 Cannot find device "nvmf_init_br2" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:51.691 Cannot find device "nvmf_tgt_br" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:51.691 Cannot find device "nvmf_tgt_br2" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:51.691 Cannot find device "nvmf_br" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:51.691 Cannot find device "nvmf_init_if" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:51.691 Cannot find device "nvmf_init_if2" 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:51.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
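Note: the "Cannot find device" and "Cannot open network namespace" messages above are the pre-setup cleanup pass failing harmlessly; nothing has been created yet. The commands that follow build the veth/bridge topology the rest of the suite runs over. Condensed from the trace below:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side, gets 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator side, gets 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side, gets 10.0.0.3/24
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target side, gets 10.0.0.4/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live inside the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                                # all four *_br peers get enslaved here
    # ...addresses assigned, links brought up, TCP port 4420 opened in iptables,
    # then connectivity verified with the four pings shown below.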
00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:51.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:51.691 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:51.692 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:51.951 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:51.951 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:23:51.951 00:23:51.951 --- 10.0.0.3 ping statistics --- 00:23:51.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.951 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:51.951 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:51.951 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:23:51.951 00:23:51.951 --- 10.0.0.4 ping statistics --- 00:23:51.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.951 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:51.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:51.951 00:23:51.951 --- 10.0.0.1 ping statistics --- 00:23:51.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.951 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:51.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:51.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:23:51.951 00:23:51.951 --- 10.0.0.2 ping statistics --- 00:23:51.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.951 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:51.951 17:24:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:52.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:52.519 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.778 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84296 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84296 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 84296 ']' 00:23:52.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.778 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:52.779 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.779 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.779 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.779 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:52.779 [2024-11-04 17:24:53.502131] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
00:23:52.779 [2024-11-04 17:24:53.502254] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.038 [2024-11-04 17:24:53.656984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.038 [2024-11-04 17:24:53.717076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.038 [2024-11-04 17:24:53.717373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.038 [2024-11-04 17:24:53.717399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.038 [2024-11-04 17:24:53.717410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.038 [2024-11-04 17:24:53.717420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.038 [2024-11-04 17:24:53.718641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.038 [2024-11-04 17:24:53.718776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.038 [2024-11-04 17:24:53.718866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.038 [2024-11-04 17:24:53.718867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.038 [2024-11-04 17:24:53.777613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:23:53.298 17:24:53 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
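For readers following the trace, the nvme_in_userspace scan that just produced the two controller BDFs reduces to one pipeline over lspci: match PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), then consult each BDF's binding under /sys/bus/pci/drivers/nvme. A condensed sketch assembled from the commands traced above (illustrative only; the real helper in scripts/common.sh also carries the allow-list and FreeBSD branches visible in the trace):

  # Build the class/subclass/prog-if strings the same way scripts/common.sh does.
  printf '%02x\n' 1    # class    01 = mass storage controller
  printf '%02x\n' 8    # subclass 08 = non-volatile memory subsystem
  printf '%02x\n' 2    # prog-if  02 = NVM Express

  # lspci -mm -n -D quotes the class code ("0108") and tags the programming
  # interface as -p02, hence the grep/awk/tr combination seen in the trace.
  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # On this VM the pipeline prints 0000:00:10.0 and 0000:00:11.0, matching the
  # (( 2 > 0 )) count checked just above.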
00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:53.298 17:24:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:53.298 ************************************ 00:23:53.298 START TEST spdk_target_abort 00:23:53.298 ************************************ 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:53.298 spdk_targetn1 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.298 17:24:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:53.298 [2024-11-04 17:24:53.993355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:53.298 [2024-11-04 17:24:54.027334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:53.298 17:24:54 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:53.298 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:53.299 17:24:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:56.586 Initializing NVMe Controllers 00:23:56.586 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:56.586 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:56.586 Initialization complete. Launching workers. 
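Everything the spdk_target_abort case did before launching this first abort workload is plain RPC against the nvmf_tgt running inside nvmf_tgt_ns_spdk: the PCIe controller at 0000:00:10.0 is attached as a bdev, exported through a TCP subsystem, and a listener is opened on 10.0.0.3:4420, the veth address that lives inside the target namespace. rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, so the equivalent manual sequence would look roughly like the sketch below (not the test's literal code; paths relative to the spdk repo):

  # Attach the PCIe controller; controller name spdk_target, bdev spdk_targetn1.
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  # TCP transport plus a subsystem that allows any host (-a) with serial -s.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  # Export the namespace and listen on the in-namespace veth address.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420

  # The abort example is then pointed at that listener; -q is the queue depth the
  # test sweeps over (4, 24, 64), -w rw -M 50 asks for a mixed read/write load,
  # -o 4096 sets the I/O size, and -r carries the transport ID string.
  build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'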
00:23:56.586 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9770, failed: 0 00:23:56.586 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1050, failed to submit 8720 00:23:56.586 success 662, unsuccessful 388, failed 0 00:23:56.586 17:24:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:56.586 17:24:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:59.877 Initializing NVMe Controllers 00:23:59.877 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:59.877 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:59.877 Initialization complete. Launching workers. 00:23:59.877 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9029, failed: 0 00:23:59.877 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1153, failed to submit 7876 00:23:59.877 success 420, unsuccessful 733, failed 0 00:24:00.136 17:25:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:00.136 17:25:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:03.424 Initializing NVMe Controllers 00:24:03.424 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:03.424 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:03.424 Initialization complete. Launching workers. 
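The per-run counters printed by the abort example are internally consistent: aborts submitted plus aborts that could not be submitted equals the number of I/Os completed on the namespace, and success plus unsuccessful equals the aborts actually submitted. A quick arithmetic check against the two runs already reported above (queue depths 4 and 24):

  # qd=4 run:  1050 aborts submitted + 8720 not submitted = 9770 I/Os completed
  echo $(( 1050 + 8720 ))   # 9770
  echo $((  662 +  388 ))   # 1050 = success + unsuccessful
  # qd=24 run: same identities hold
  echo $(( 1153 + 7876 ))   # 9029 I/Os completed
  echo $((  420 +  733 ))   # 1153 aborts submitted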
00:24:03.424 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30859, failed: 0 00:24:03.424 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2247, failed to submit 28612 00:24:03.424 success 501, unsuccessful 1746, failed 0 00:24:03.424 17:25:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:03.424 17:25:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.424 17:25:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:03.424 17:25:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.424 17:25:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:03.424 17:25:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.424 17:25:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:03.683 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84296 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 84296 ']' 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 84296 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84296 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:03.684 killing process with pid 84296 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84296' 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 84296 00:24:03.684 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 84296 00:24:03.942 ************************************ 00:24:03.942 END TEST spdk_target_abort 00:24:03.942 00:24:03.942 real 0m10.724s 00:24:03.942 user 0m40.896s 00:24:03.942 sys 0m1.976s 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:03.942 ************************************ 00:24:03.942 17:25:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:03.942 17:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:03.942 17:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:03.942 17:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:03.942 ************************************ 00:24:03.942 START TEST kernel_target_abort 00:24:03.942 
************************************ 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:03.942 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:04.201 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:04.201 17:25:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:04.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:04.461 Waiting for block devices as requested 00:24:04.461 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.461 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:04.720 No valid GPT data, bailing 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:04.720 No valid GPT data, bailing 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:04.720 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:04.720 No valid GPT data, bailing 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:04.980 No valid GPT data, bailing 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 --hostid=8c073979-9b92-4972-b56b-796474446288 -a 10.0.0.1 -t tcp -s 4420 00:24:04.980 00:24:04.980 Discovery Log Number of Records 2, Generation counter 2 00:24:04.980 =====Discovery Log Entry 0====== 00:24:04.980 trtype: tcp 00:24:04.980 adrfam: ipv4 00:24:04.980 subtype: current discovery subsystem 00:24:04.980 treq: not specified, sq flow control disable supported 00:24:04.980 portid: 1 00:24:04.980 trsvcid: 4420 00:24:04.980 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:04.980 traddr: 10.0.0.1 00:24:04.980 eflags: none 00:24:04.980 sectype: none 00:24:04.980 =====Discovery Log Entry 1====== 00:24:04.980 trtype: tcp 00:24:04.980 adrfam: ipv4 00:24:04.980 subtype: nvme subsystem 00:24:04.980 treq: not specified, sq flow control disable supported 00:24:04.980 portid: 1 00:24:04.980 trsvcid: 4420 00:24:04.980 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:04.980 traddr: 10.0.0.1 00:24:04.980 eflags: none 00:24:04.980 sectype: none 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:04.980 17:25:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:04.980 17:25:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:08.270 Initializing NVMe Controllers 00:24:08.270 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:08.270 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:08.270 Initialization complete. Launching workers. 00:24:08.271 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30236, failed: 0 00:24:08.271 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30236, failed to submit 0 00:24:08.271 success 0, unsuccessful 30236, failed 0 00:24:08.271 17:25:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:08.271 17:25:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:11.564 Initializing NVMe Controllers 00:24:11.564 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:11.564 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:11.564 Initialization complete. Launching workers. 
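The kernel target configured a few lines above is plain nvmet configfs plumbing: after setup.sh reset handed the controllers back to the kernel nvme driver, the block-device loop settled on /dev/nvme1n1 (each /sys/block/nvme* entry is checked for zoning and partition tables, hence the repeated "No valid GPT data, bailing"), and the mkdir/echo/ln sequence wired it up behind nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420. The xtrace does not show redirection targets, so the attribute names below are the standard nvmet configfs layout rather than something read from the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  modprobe nvmet
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string (assumed target file)
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"   # main-namespace IP from get_main_ns_ip
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"        # enables the listener

The "Discovery Log Number of Records 2" output earlier in the log confirms both the discovery subsystem and nqn.2016-06.io.spdk:testnqn came up before the kernel abort runs started.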
00:24:11.564 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66584, failed: 0 00:24:11.564 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28253, failed to submit 38331 00:24:11.564 success 0, unsuccessful 28253, failed 0 00:24:11.564 17:25:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:11.564 17:25:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:14.883 Initializing NVMe Controllers 00:24:14.884 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:14.884 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:14.884 Initialization complete. Launching workers. 00:24:14.884 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75693, failed: 0 00:24:14.884 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18874, failed to submit 56819 00:24:14.884 success 0, unsuccessful 18874, failed 0 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:14.884 17:25:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:15.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:17.049 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:17.049 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:17.049 ************************************ 00:24:17.049 END TEST kernel_target_abort 00:24:17.049 ************************************ 00:24:17.050 00:24:17.050 real 0m12.893s 00:24:17.050 user 0m5.831s 00:24:17.050 sys 0m4.480s 00:24:17.050 17:25:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:17.050 17:25:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:17.050 
17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.050 rmmod nvme_tcp 00:24:17.050 rmmod nvme_fabrics 00:24:17.050 rmmod nvme_keyring 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84296 ']' 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84296 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 84296 ']' 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 84296 00:24:17.050 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (84296) - No such process 00:24:17.050 Process with pid 84296 is not found 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 84296 is not found' 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:17.050 17:25:17 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:17.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:17.308 Waiting for block devices as requested 00:24:17.567 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:17.567 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:17.567 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.567 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:17.568 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:17.827 17:25:18 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:17.827 00:24:17.827 real 0m26.595s 00:24:17.827 user 0m47.824s 00:24:17.827 sys 0m7.794s 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:17.827 17:25:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:17.827 ************************************ 00:24:17.827 END TEST nvmf_abort_qd_sizes 00:24:17.827 ************************************ 00:24:17.827 17:25:18 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:17.827 17:25:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:17.827 17:25:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:17.827 17:25:18 -- common/autotest_common.sh@10 -- # set +x 00:24:17.827 ************************************ 00:24:17.827 START TEST keyring_file 00:24:17.827 ************************************ 00:24:17.827 17:25:18 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:18.087 * Looking for test storage... 
00:24:18.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.087 --rc genhtml_branch_coverage=1 00:24:18.087 --rc genhtml_function_coverage=1 00:24:18.087 --rc genhtml_legend=1 00:24:18.087 --rc geninfo_all_blocks=1 00:24:18.087 --rc geninfo_unexecuted_blocks=1 00:24:18.087 00:24:18.087 ' 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.087 --rc genhtml_branch_coverage=1 00:24:18.087 --rc genhtml_function_coverage=1 00:24:18.087 --rc genhtml_legend=1 00:24:18.087 --rc geninfo_all_blocks=1 00:24:18.087 --rc 
geninfo_unexecuted_blocks=1 00:24:18.087 00:24:18.087 ' 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.087 --rc genhtml_branch_coverage=1 00:24:18.087 --rc genhtml_function_coverage=1 00:24:18.087 --rc genhtml_legend=1 00:24:18.087 --rc geninfo_all_blocks=1 00:24:18.087 --rc geninfo_unexecuted_blocks=1 00:24:18.087 00:24:18.087 ' 00:24:18.087 17:25:18 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:18.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.087 --rc genhtml_branch_coverage=1 00:24:18.087 --rc genhtml_function_coverage=1 00:24:18.087 --rc genhtml_legend=1 00:24:18.087 --rc geninfo_all_blocks=1 00:24:18.087 --rc geninfo_unexecuted_blocks=1 00:24:18.087 00:24:18.087 ' 00:24:18.087 17:25:18 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:18.087 17:25:18 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.087 17:25:18 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.087 17:25:18 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.087 17:25:18 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.087 17:25:18 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.087 17:25:18 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:18.087 17:25:18 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.087 17:25:18 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.088 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:18.088 17:25:18 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:18.088 17:25:18 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:18.088 17:25:18 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:18.088 17:25:18 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:18.088 17:25:18 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:18.088 17:25:18 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:18.088 17:25:18 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VAGRHyB3Re 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VAGRHyB3Re 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VAGRHyB3Re 00:24:18.088 17:25:18 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VAGRHyB3Re 00:24:18.088 17:25:18 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6RMs33vlw4 00:24:18.088 17:25:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:18.088 17:25:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:18.347 17:25:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6RMs33vlw4 00:24:18.347 17:25:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6RMs33vlw4 00:24:18.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
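For reference, the prep_key trace above (keyring/common.sh) boils down to a few shell steps: allocate a temporary file, write the key material in the NVMe TLS PSK interchange format, and lock the file down to mode 0600. A condensed sketch with the same inputs as this run follows; format_interchange_psk is the helper sourced from test/nvmf/common.sh, and the comment only summarizes its output since the python body it pipes through is not expanded in the trace.

    # Sketch of prep_key key0/key1 as traced; the /tmp paths are the mktemp results from this run.
    key0path=$(mktemp)    # /tmp/tmp.VAGRHyB3Re here
    key1path=$(mktemp)    # /tmp/tmp.6RMs33vlw4 here
    # format_interchange_psk <hex-key> <digest> prints an "NVMeTLSkey-1:..." string (base64 of the
    # key plus a CRC32), i.e. the NVMe/TCP PSK interchange form the keyring expects on disk.
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
    format_interchange_psk 112233445566778899aabbccddeeff00 0 > "$key1path"
    chmod 0600 "$key0path" "$key1path"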
00:24:18.347 17:25:18 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6RMs33vlw4 00:24:18.347 17:25:18 keyring_file -- keyring/file.sh@30 -- # tgtpid=85202 00:24:18.347 17:25:18 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:18.347 17:25:18 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85202 00:24:18.347 17:25:18 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85202 ']' 00:24:18.347 17:25:18 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.347 17:25:18 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.347 17:25:18 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.347 17:25:18 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.347 17:25:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:18.348 [2024-11-04 17:25:18.958956] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:24:18.348 [2024-11-04 17:25:18.959229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85202 ] 00:24:18.348 [2024-11-04 17:25:19.108755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.607 [2024-11-04 17:25:19.162165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.607 [2024-11-04 17:25:19.237105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:24:18.866 17:25:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:18.866 [2024-11-04 17:25:19.445043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.866 null0 00:24:18.866 [2024-11-04 17:25:19.477021] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:18.866 [2024-11-04 17:25:19.477199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.866 17:25:19 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:18.866 [2024-11-04 17:25:19.505017] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:18.866 request: 00:24:18.866 { 00:24:18.866 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:18.866 "secure_channel": false, 00:24:18.866 "listen_address": { 00:24:18.866 "trtype": "tcp", 00:24:18.866 "traddr": "127.0.0.1", 00:24:18.866 "trsvcid": "4420" 00:24:18.866 }, 00:24:18.866 "method": "nvmf_subsystem_add_listener", 00:24:18.866 "req_id": 1 00:24:18.866 } 00:24:18.866 Got JSON-RPC error response 00:24:18.866 response: 00:24:18.866 { 00:24:18.866 "code": -32602, 00:24:18.866 "message": "Invalid parameters" 00:24:18.866 } 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.866 17:25:19 keyring_file -- keyring/file.sh@47 -- # bperfpid=85212 00:24:18.866 17:25:19 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:18.866 17:25:19 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85212 /var/tmp/bperf.sock 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85212 ']' 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:18.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.866 17:25:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:18.866 [2024-11-04 17:25:19.571690] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
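The -32602 response above is the expected result of the duplicate-listener check: file.sh already created the 127.0.0.1:4420 listener on the target, so repeating the RPC has to fail, and the NOT helper from autotest_common.sh turns that failure into a passing assertion. In shorthand (rpc_cmd talks to the spdk_tgt started above on the default /var/tmp/spdk.sock):

    # NOT succeeds only if the wrapped command exits non-zero.
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0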
00:24:18.866 [2024-11-04 17:25:19.572297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85212 ] 00:24:19.125 [2024-11-04 17:25:19.725764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.125 [2024-11-04 17:25:19.777331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.125 [2024-11-04 17:25:19.833560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:19.125 17:25:19 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.125 17:25:19 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:24:19.125 17:25:19 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re 00:24:19.125 17:25:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re 00:24:19.384 17:25:20 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6RMs33vlw4 00:24:19.384 17:25:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6RMs33vlw4 00:24:19.643 17:25:20 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:19.643 17:25:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:19.643 17:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.643 17:25:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:19.643 17:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:19.902 17:25:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.VAGRHyB3Re == \/\t\m\p\/\t\m\p\.\V\A\G\R\H\y\B\3\R\e ]] 00:24:19.902 17:25:20 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:19.902 17:25:20 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:19.902 17:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:19.902 17:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:19.902 17:25:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:20.161 17:25:20 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.6RMs33vlw4 == \/\t\m\p\/\t\m\p\.\6\R\M\s\3\3\v\l\w\4 ]] 00:24:20.161 17:25:20 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:20.161 17:25:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:20.161 17:25:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:20.161 17:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:20.161 17:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:20.161 17:25:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:20.424 17:25:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:20.424 17:25:21 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:20.424 17:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:20.424 17:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:20.424 17:25:21 keyring_file -- 
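Once bdevperf is up on /var/tmp/bperf.sock, the two key files are registered and read back; the path reported by keyring_get_keys must match the file the test just wrote. Stripped of the bperf_cmd/get_key wrappers, the traced calls are roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # shorthand introduced for this sketch
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6RMs33vlw4
    # verify: pull the key0 entry out of keyring_get_keys and compare its path against the file
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .path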
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:20.424 17:25:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:20.424 17:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:20.682 17:25:21 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:20.682 17:25:21 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.682 17:25:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:20.941 [2024-11-04 17:25:21.734135] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.200 nvme0n1 00:24:21.200 17:25:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:21.200 17:25:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:21.200 17:25:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:21.200 17:25:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:21.201 17:25:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:21.201 17:25:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:21.460 17:25:22 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:24:21.460 17:25:22 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:24:21.460 17:25:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:21.460 17:25:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:21.460 17:25:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:21.460 17:25:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:21.460 17:25:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:21.719 17:25:22 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:24:21.719 17:25:22 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:21.719 Running I/O for 1 seconds... 
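The positive path seen here is one attach RPC that names the registered key as the PSK, after which key0's refcnt rises to 2 (the keyring entry plus the live TLS connection) while key1 stays at 1, and bdevperf.py drives the one-second randrw run. Condensed, with the same rpc shorthand as above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # expect refcnt 2 for key0 while the controller holds it
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests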
00:24:22.656 13578.00 IOPS, 53.04 MiB/s 00:24:22.656 Latency(us) 00:24:22.656 [2024-11-04T17:25:23.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.656 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:22.656 nvme0n1 : 1.01 13628.44 53.24 0.00 0.00 9368.02 3932.16 18826.71 00:24:22.656 [2024-11-04T17:25:23.460Z] =================================================================================================================== 00:24:22.656 [2024-11-04T17:25:23.460Z] Total : 13628.44 53.24 0.00 0.00 9368.02 3932.16 18826.71 00:24:22.656 { 00:24:22.656 "results": [ 00:24:22.656 { 00:24:22.656 "job": "nvme0n1", 00:24:22.656 "core_mask": "0x2", 00:24:22.656 "workload": "randrw", 00:24:22.656 "percentage": 50, 00:24:22.656 "status": "finished", 00:24:22.656 "queue_depth": 128, 00:24:22.656 "io_size": 4096, 00:24:22.656 "runtime": 1.005838, 00:24:22.656 "iops": 13628.437183721435, 00:24:22.656 "mibps": 53.236082748911855, 00:24:22.656 "io_failed": 0, 00:24:22.656 "io_timeout": 0, 00:24:22.656 "avg_latency_us": 9368.02367509351, 00:24:22.656 "min_latency_us": 3932.16, 00:24:22.656 "max_latency_us": 18826.705454545456 00:24:22.656 } 00:24:22.656 ], 00:24:22.656 "core_count": 1 00:24:22.656 } 00:24:22.656 17:25:23 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:22.656 17:25:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:22.915 17:25:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:22.915 17:25:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:22.915 17:25:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:22.915 17:25:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:22.915 17:25:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.915 17:25:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.174 17:25:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:23.174 17:25:23 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:23.174 17:25:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:23.174 17:25:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:23.174 17:25:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:23.174 17:25:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:23.174 17:25:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.434 17:25:24 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:23.434 17:25:24 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:23.434 17:25:24 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:23.434 17:25:24 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:23.434 17:25:24 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:23.434 17:25:24 keyring_file -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:24:23.434 17:25:24 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:23.434 17:25:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.434 17:25:24 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:23.434 17:25:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:23.693 [2024-11-04 17:25:24.431326] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:23.693 [2024-11-04 17:25:24.431652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7770 (107): Transport endpoint is not connected 00:24:23.693 [2024-11-04 17:25:24.432628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b7770 (9): Bad file descriptor 00:24:23.693 [2024-11-04 17:25:24.433626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:23.693 [2024-11-04 17:25:24.433666] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:23.693 [2024-11-04 17:25:24.433692] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:23.693 [2024-11-04 17:25:24.433703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:23.693 request: 00:24:23.693 { 00:24:23.693 "name": "nvme0", 00:24:23.693 "trtype": "tcp", 00:24:23.693 "traddr": "127.0.0.1", 00:24:23.693 "adrfam": "ipv4", 00:24:23.693 "trsvcid": "4420", 00:24:23.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:23.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:23.693 "prchk_reftag": false, 00:24:23.693 "prchk_guard": false, 00:24:23.693 "hdgst": false, 00:24:23.693 "ddgst": false, 00:24:23.693 "psk": "key1", 00:24:23.693 "allow_unrecognized_csi": false, 00:24:23.693 "method": "bdev_nvme_attach_controller", 00:24:23.693 "req_id": 1 00:24:23.693 } 00:24:23.693 Got JSON-RPC error response 00:24:23.693 response: 00:24:23.693 { 00:24:23.693 "code": -5, 00:24:23.693 "message": "Input/output error" 00:24:23.693 } 00:24:23.693 17:25:24 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:23.693 17:25:24 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.693 17:25:24 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.693 17:25:24 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.693 17:25:24 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:23.693 17:25:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:23.693 17:25:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:23.693 17:25:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:23.693 17:25:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.693 17:25:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:24.261 17:25:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:24.261 17:25:24 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:24.261 17:25:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:24.261 17:25:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:24.261 17:25:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:24.261 17:25:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:24.261 17:25:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:24.520 17:25:25 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:24.520 17:25:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:24.520 17:25:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:24.779 17:25:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:24.779 17:25:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:25.038 17:25:25 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:25.038 17:25:25 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:25.038 17:25:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:25.296 17:25:25 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:25.296 17:25:25 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.VAGRHyB3Re 00:24:25.296 17:25:25 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re 00:24:25.296 17:25:25 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:24:25.296 17:25:25 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re 00:24:25.296 17:25:25 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:25.296 17:25:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.296 17:25:25 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:25.296 17:25:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.296 17:25:25 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re 00:24:25.296 17:25:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re 00:24:25.555 [2024-11-04 17:25:26.106021] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VAGRHyB3Re': 0100660 00:24:25.555 [2024-11-04 17:25:26.106080] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:25.555 request: 00:24:25.555 { 00:24:25.555 "name": "key0", 00:24:25.555 "path": "/tmp/tmp.VAGRHyB3Re", 00:24:25.555 "method": "keyring_file_add_key", 00:24:25.555 "req_id": 1 00:24:25.555 } 00:24:25.555 Got JSON-RPC error response 00:24:25.555 response: 00:24:25.555 { 00:24:25.555 "code": -1, 00:24:25.555 "message": "Operation not permitted" 00:24:25.555 } 00:24:25.555 17:25:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:25.555 17:25:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:25.555 17:25:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:25.555 17:25:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:25.555 17:25:26 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.VAGRHyB3Re 00:24:25.555 17:25:26 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re 00:24:25.555 17:25:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re 00:24:25.555 17:25:26 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.VAGRHyB3Re 00:24:25.814 17:25:26 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:24:25.814 17:25:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:25.814 17:25:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:25.814 17:25:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:25.814 17:25:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:25.814 17:25:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:26.073 17:25:26 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:24:26.073 17:25:26 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:26.073 17:25:26 
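The "Invalid permissions for key file" error above is the point of this step: keyring_file rejects key files whose mode grants group or other access, so widening the mode to 0660 must make keyring_file_add_key fail with Operation not permitted before the test restores 0600 and re-adds the key. In outline:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    chmod 0660 /tmp/tmp.VAGRHyB3Re
    NOT $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re    # rejected, code -1
    chmod 0600 /tmp/tmp.VAGRHyB3Re
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VAGRHyB3Re        # accepted again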
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:26.073 17:25:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:26.073 [2024-11-04 17:25:26.850164] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VAGRHyB3Re': No such file or directory 00:24:26.073 [2024-11-04 17:25:26.850198] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:26.073 [2024-11-04 17:25:26.850228] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:26.073 [2024-11-04 17:25:26.850239] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:24:26.073 [2024-11-04 17:25:26.850250] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:26.073 [2024-11-04 17:25:26.850259] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:26.073 request: 00:24:26.073 { 00:24:26.073 "name": "nvme0", 00:24:26.073 "trtype": "tcp", 00:24:26.073 "traddr": "127.0.0.1", 00:24:26.073 "adrfam": "ipv4", 00:24:26.073 "trsvcid": "4420", 00:24:26.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.073 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:26.073 "prchk_reftag": false, 00:24:26.073 "prchk_guard": false, 00:24:26.073 "hdgst": false, 00:24:26.073 "ddgst": false, 00:24:26.073 "psk": "key0", 00:24:26.073 "allow_unrecognized_csi": false, 00:24:26.073 "method": "bdev_nvme_attach_controller", 00:24:26.073 "req_id": 1 00:24:26.073 } 00:24:26.073 Got JSON-RPC error response 00:24:26.073 response: 00:24:26.073 { 00:24:26.073 "code": -19, 00:24:26.073 "message": "No such device" 00:24:26.073 } 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:26.073 17:25:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:26.073 17:25:26 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:24:26.073 17:25:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:26.332 17:25:27 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:26.332 17:25:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:26.332 17:25:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:26.332 17:25:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:26.332 
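The "No such file or directory" / "No such device" exchange above covers the companion case: the backing file was removed (the rm -f at file.sh@87 earlier) while the key object stayed registered, so an attach that names key0 as the PSK must fail, and the stale entry is then dropped before a fresh key file is generated. Roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rm -f /tmp/tmp.VAGRHyB3Re        # key0 remains in the keyring, refcnt 1, but its file is gone
    NOT $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0    # code -19
    $rpc -s /var/tmp/bperf.sock keyring_file_remove_key key0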
17:25:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:26.332 17:25:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:26.332 17:25:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0swIDqEuJy 00:24:26.332 17:25:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:26.332 17:25:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:26.332 17:25:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:26.332 17:25:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:26.333 17:25:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:26.333 17:25:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:26.333 17:25:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:26.333 17:25:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0swIDqEuJy 00:24:26.333 17:25:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0swIDqEuJy 00:24:26.333 17:25:27 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.0swIDqEuJy 00:24:26.333 17:25:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0swIDqEuJy 00:24:26.333 17:25:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0swIDqEuJy 00:24:26.901 17:25:27 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:26.901 17:25:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:26.901 nvme0n1 00:24:27.160 17:25:27 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:24:27.160 17:25:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:27.160 17:25:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:27.160 17:25:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:27.160 17:25:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:27.160 17:25:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:27.418 17:25:27 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:24:27.418 17:25:27 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:24:27.418 17:25:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:27.418 17:25:28 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:24:27.418 17:25:28 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:24:27.418 17:25:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:27.418 17:25:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:27.418 17:25:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:27.677 17:25:28 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:24:27.677 17:25:28 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:24:27.677 17:25:28 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:24:27.677 17:25:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:27.677 17:25:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:27.677 17:25:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:27.677 17:25:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:27.937 17:25:28 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:24:27.937 17:25:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:27.937 17:25:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:28.195 17:25:28 keyring_file -- keyring/file.sh@105 -- # jq length 00:24:28.195 17:25:28 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:24:28.195 17:25:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:28.454 17:25:29 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:24:28.454 17:25:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0swIDqEuJy 00:24:28.454 17:25:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0swIDqEuJy 00:24:28.733 17:25:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6RMs33vlw4 00:24:28.733 17:25:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6RMs33vlw4 00:24:28.991 17:25:29 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:28.991 17:25:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:29.250 nvme0n1 00:24:29.250 17:25:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:24:29.250 17:25:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:29.819 17:25:30 keyring_file -- keyring/file.sh@113 -- # config='{ 00:24:29.819 "subsystems": [ 00:24:29.819 { 00:24:29.819 "subsystem": "keyring", 00:24:29.819 "config": [ 00:24:29.819 { 00:24:29.819 "method": "keyring_file_add_key", 00:24:29.819 "params": { 00:24:29.819 "name": "key0", 00:24:29.819 "path": "/tmp/tmp.0swIDqEuJy" 00:24:29.819 } 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "method": "keyring_file_add_key", 00:24:29.819 "params": { 00:24:29.819 "name": "key1", 00:24:29.819 "path": "/tmp/tmp.6RMs33vlw4" 00:24:29.819 } 00:24:29.819 } 00:24:29.819 ] 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "subsystem": "iobuf", 00:24:29.819 "config": [ 00:24:29.819 { 00:24:29.819 "method": "iobuf_set_options", 00:24:29.819 "params": { 00:24:29.819 "small_pool_count": 8192, 00:24:29.819 "large_pool_count": 1024, 00:24:29.819 "small_bufsize": 8192, 00:24:29.819 "large_bufsize": 135168, 00:24:29.819 "enable_numa": false 00:24:29.819 } 00:24:29.819 } 00:24:29.819 ] 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "subsystem": 
"sock", 00:24:29.819 "config": [ 00:24:29.819 { 00:24:29.819 "method": "sock_set_default_impl", 00:24:29.819 "params": { 00:24:29.819 "impl_name": "uring" 00:24:29.819 } 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "method": "sock_impl_set_options", 00:24:29.819 "params": { 00:24:29.819 "impl_name": "ssl", 00:24:29.819 "recv_buf_size": 4096, 00:24:29.819 "send_buf_size": 4096, 00:24:29.819 "enable_recv_pipe": true, 00:24:29.819 "enable_quickack": false, 00:24:29.819 "enable_placement_id": 0, 00:24:29.819 "enable_zerocopy_send_server": true, 00:24:29.819 "enable_zerocopy_send_client": false, 00:24:29.819 "zerocopy_threshold": 0, 00:24:29.819 "tls_version": 0, 00:24:29.819 "enable_ktls": false 00:24:29.819 } 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "method": "sock_impl_set_options", 00:24:29.819 "params": { 00:24:29.819 "impl_name": "posix", 00:24:29.819 "recv_buf_size": 2097152, 00:24:29.819 "send_buf_size": 2097152, 00:24:29.819 "enable_recv_pipe": true, 00:24:29.819 "enable_quickack": false, 00:24:29.819 "enable_placement_id": 0, 00:24:29.819 "enable_zerocopy_send_server": true, 00:24:29.819 "enable_zerocopy_send_client": false, 00:24:29.819 "zerocopy_threshold": 0, 00:24:29.819 "tls_version": 0, 00:24:29.819 "enable_ktls": false 00:24:29.819 } 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "method": "sock_impl_set_options", 00:24:29.819 "params": { 00:24:29.819 "impl_name": "uring", 00:24:29.819 "recv_buf_size": 2097152, 00:24:29.819 "send_buf_size": 2097152, 00:24:29.819 "enable_recv_pipe": true, 00:24:29.819 "enable_quickack": false, 00:24:29.819 "enable_placement_id": 0, 00:24:29.819 "enable_zerocopy_send_server": false, 00:24:29.819 "enable_zerocopy_send_client": false, 00:24:29.819 "zerocopy_threshold": 0, 00:24:29.819 "tls_version": 0, 00:24:29.819 "enable_ktls": false 00:24:29.819 } 00:24:29.819 } 00:24:29.819 ] 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "subsystem": "vmd", 00:24:29.819 "config": [] 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "subsystem": "accel", 00:24:29.819 "config": [ 00:24:29.819 { 00:24:29.819 "method": "accel_set_options", 00:24:29.819 "params": { 00:24:29.819 "small_cache_size": 128, 00:24:29.819 "large_cache_size": 16, 00:24:29.819 "task_count": 2048, 00:24:29.819 "sequence_count": 2048, 00:24:29.819 "buf_count": 2048 00:24:29.819 } 00:24:29.819 } 00:24:29.819 ] 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "subsystem": "bdev", 00:24:29.819 "config": [ 00:24:29.819 { 00:24:29.819 "method": "bdev_set_options", 00:24:29.819 "params": { 00:24:29.819 "bdev_io_pool_size": 65535, 00:24:29.819 "bdev_io_cache_size": 256, 00:24:29.819 "bdev_auto_examine": true, 00:24:29.819 "iobuf_small_cache_size": 128, 00:24:29.819 "iobuf_large_cache_size": 16 00:24:29.819 } 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "method": "bdev_raid_set_options", 00:24:29.819 "params": { 00:24:29.819 "process_window_size_kb": 1024, 00:24:29.819 "process_max_bandwidth_mb_sec": 0 00:24:29.819 } 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "method": "bdev_iscsi_set_options", 00:24:29.819 "params": { 00:24:29.819 "timeout_sec": 30 00:24:29.819 } 00:24:29.819 }, 00:24:29.819 { 00:24:29.819 "method": "bdev_nvme_set_options", 00:24:29.819 "params": { 00:24:29.819 "action_on_timeout": "none", 00:24:29.819 "timeout_us": 0, 00:24:29.819 "timeout_admin_us": 0, 00:24:29.819 "keep_alive_timeout_ms": 10000, 00:24:29.819 "arbitration_burst": 0, 00:24:29.819 "low_priority_weight": 0, 00:24:29.819 "medium_priority_weight": 0, 00:24:29.819 "high_priority_weight": 0, 00:24:29.819 "nvme_adminq_poll_period_us": 
10000, 00:24:29.819 "nvme_ioq_poll_period_us": 0, 00:24:29.819 "io_queue_requests": 512, 00:24:29.819 "delay_cmd_submit": true, 00:24:29.819 "transport_retry_count": 4, 00:24:29.819 "bdev_retry_count": 3, 00:24:29.819 "transport_ack_timeout": 0, 00:24:29.819 "ctrlr_loss_timeout_sec": 0, 00:24:29.819 "reconnect_delay_sec": 0, 00:24:29.819 "fast_io_fail_timeout_sec": 0, 00:24:29.819 "disable_auto_failback": false, 00:24:29.819 "generate_uuids": false, 00:24:29.819 "transport_tos": 0, 00:24:29.819 "nvme_error_stat": false, 00:24:29.819 "rdma_srq_size": 0, 00:24:29.819 "io_path_stat": false, 00:24:29.819 "allow_accel_sequence": false, 00:24:29.820 "rdma_max_cq_size": 0, 00:24:29.820 "rdma_cm_event_timeout_ms": 0, 00:24:29.820 "dhchap_digests": [ 00:24:29.820 "sha256", 00:24:29.820 "sha384", 00:24:29.820 "sha512" 00:24:29.820 ], 00:24:29.820 "dhchap_dhgroups": [ 00:24:29.820 "null", 00:24:29.820 "ffdhe2048", 00:24:29.820 "ffdhe3072", 00:24:29.820 "ffdhe4096", 00:24:29.820 "ffdhe6144", 00:24:29.820 "ffdhe8192" 00:24:29.820 ] 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": "bdev_nvme_attach_controller", 00:24:29.820 "params": { 00:24:29.820 "name": "nvme0", 00:24:29.820 "trtype": "TCP", 00:24:29.820 "adrfam": "IPv4", 00:24:29.820 "traddr": "127.0.0.1", 00:24:29.820 "trsvcid": "4420", 00:24:29.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:29.820 "prchk_reftag": false, 00:24:29.820 "prchk_guard": false, 00:24:29.820 "ctrlr_loss_timeout_sec": 0, 00:24:29.820 "reconnect_delay_sec": 0, 00:24:29.820 "fast_io_fail_timeout_sec": 0, 00:24:29.820 "psk": "key0", 00:24:29.820 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:29.820 "hdgst": false, 00:24:29.820 "ddgst": false, 00:24:29.820 "multipath": "multipath" 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": "bdev_nvme_set_hotplug", 00:24:29.820 "params": { 00:24:29.820 "period_us": 100000, 00:24:29.820 "enable": false 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": "bdev_wait_for_examine" 00:24:29.820 } 00:24:29.820 ] 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "subsystem": "nbd", 00:24:29.820 "config": [] 00:24:29.820 } 00:24:29.820 ] 00:24:29.820 }' 00:24:29.820 17:25:30 keyring_file -- keyring/file.sh@115 -- # killprocess 85212 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85212 ']' 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85212 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@957 -- # uname 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85212 00:24:29.820 killing process with pid 85212 00:24:29.820 Received shutdown signal, test time was about 1.000000 seconds 00:24:29.820 00:24:29.820 Latency(us) 00:24:29.820 [2024-11-04T17:25:30.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.820 [2024-11-04T17:25:30.624Z] =================================================================================================================== 00:24:29.820 [2024-11-04T17:25:30.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85212' 00:24:29.820 
17:25:30 keyring_file -- common/autotest_common.sh@971 -- # kill 85212 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@976 -- # wait 85212 00:24:29.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:29.820 17:25:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=85450 00:24:29.820 17:25:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85450 /var/tmp/bperf.sock 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85450 ']' 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:29.820 17:25:30 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:29.820 17:25:30 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:29.820 17:25:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:24:29.820 "subsystems": [ 00:24:29.820 { 00:24:29.820 "subsystem": "keyring", 00:24:29.820 "config": [ 00:24:29.820 { 00:24:29.820 "method": "keyring_file_add_key", 00:24:29.820 "params": { 00:24:29.820 "name": "key0", 00:24:29.820 "path": "/tmp/tmp.0swIDqEuJy" 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": "keyring_file_add_key", 00:24:29.820 "params": { 00:24:29.820 "name": "key1", 00:24:29.820 "path": "/tmp/tmp.6RMs33vlw4" 00:24:29.820 } 00:24:29.820 } 00:24:29.820 ] 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "subsystem": "iobuf", 00:24:29.820 "config": [ 00:24:29.820 { 00:24:29.820 "method": "iobuf_set_options", 00:24:29.820 "params": { 00:24:29.820 "small_pool_count": 8192, 00:24:29.820 "large_pool_count": 1024, 00:24:29.820 "small_bufsize": 8192, 00:24:29.820 "large_bufsize": 135168, 00:24:29.820 "enable_numa": false 00:24:29.820 } 00:24:29.820 } 00:24:29.820 ] 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "subsystem": "sock", 00:24:29.820 "config": [ 00:24:29.820 { 00:24:29.820 "method": "sock_set_default_impl", 00:24:29.820 "params": { 00:24:29.820 "impl_name": "uring" 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": "sock_impl_set_options", 00:24:29.820 "params": { 00:24:29.820 "impl_name": "ssl", 00:24:29.820 "recv_buf_size": 4096, 00:24:29.820 "send_buf_size": 4096, 00:24:29.820 "enable_recv_pipe": true, 00:24:29.820 "enable_quickack": false, 00:24:29.820 "enable_placement_id": 0, 00:24:29.820 "enable_zerocopy_send_server": true, 00:24:29.820 "enable_zerocopy_send_client": false, 00:24:29.820 "zerocopy_threshold": 0, 00:24:29.820 "tls_version": 0, 00:24:29.820 "enable_ktls": false 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": "sock_impl_set_options", 00:24:29.820 "params": { 00:24:29.820 "impl_name": "posix", 00:24:29.820 "recv_buf_size": 2097152, 00:24:29.820 "send_buf_size": 2097152, 00:24:29.820 "enable_recv_pipe": true, 00:24:29.820 "enable_quickack": false, 00:24:29.820 "enable_placement_id": 0, 00:24:29.820 "enable_zerocopy_send_server": true, 00:24:29.820 "enable_zerocopy_send_client": false, 00:24:29.820 "zerocopy_threshold": 0, 00:24:29.820 "tls_version": 0, 00:24:29.820 "enable_ktls": false 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": 
"sock_impl_set_options", 00:24:29.820 "params": { 00:24:29.820 "impl_name": "uring", 00:24:29.820 "recv_buf_size": 2097152, 00:24:29.820 "send_buf_size": 2097152, 00:24:29.820 "enable_recv_pipe": true, 00:24:29.820 "enable_quickack": false, 00:24:29.820 "enable_placement_id": 0, 00:24:29.820 "enable_zerocopy_send_server": false, 00:24:29.820 "enable_zerocopy_send_client": false, 00:24:29.820 "zerocopy_threshold": 0, 00:24:29.820 "tls_version": 0, 00:24:29.820 "enable_ktls": false 00:24:29.820 } 00:24:29.820 } 00:24:29.820 ] 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "subsystem": "vmd", 00:24:29.820 "config": [] 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "subsystem": "accel", 00:24:29.820 "config": [ 00:24:29.820 { 00:24:29.820 "method": "accel_set_options", 00:24:29.820 "params": { 00:24:29.820 "small_cache_size": 128, 00:24:29.820 "large_cache_size": 16, 00:24:29.820 "task_count": 2048, 00:24:29.820 "sequence_count": 2048, 00:24:29.820 "buf_count": 2048 00:24:29.820 } 00:24:29.820 } 00:24:29.820 ] 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "subsystem": "bdev", 00:24:29.820 "config": [ 00:24:29.820 { 00:24:29.820 "method": "bdev_set_options", 00:24:29.820 "params": { 00:24:29.820 "bdev_io_pool_size": 65535, 00:24:29.820 "bdev_io_cache_size": 256, 00:24:29.820 "bdev_auto_examine": true, 00:24:29.820 "iobuf_small_cache_size": 128, 00:24:29.820 "iobuf_large_cache_size": 16 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": "bdev_raid_set_options", 00:24:29.820 "params": { 00:24:29.820 "process_window_size_kb": 1024, 00:24:29.820 "process_max_bandwidth_mb_sec": 0 00:24:29.820 } 00:24:29.820 }, 00:24:29.820 { 00:24:29.820 "method": "bdev_iscsi_set_options", 00:24:29.820 "params": { 00:24:29.820 "timeout_sec": 30 00:24:29.821 } 00:24:29.821 }, 00:24:29.821 { 00:24:29.821 "method": "bdev_nvme_set_options", 00:24:29.821 "params": { 00:24:29.821 "action_on_timeout": "none", 00:24:29.821 "timeout_us": 0, 00:24:29.821 "timeout_admin_us": 0, 00:24:29.821 "keep_alive_timeout_ms": 10000, 00:24:29.821 "arbitration_burst": 0, 00:24:29.821 "low_priority_weight": 0, 00:24:29.821 "medium_priority_weight": 0, 00:24:29.821 "high_priority_weight": 0, 00:24:29.821 "nvme_adminq_poll_period_us": 10000, 00:24:29.821 "nvme_io 17:25:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:29.821 q_poll_period_us": 0, 00:24:29.821 "io_queue_requests": 512, 00:24:29.821 "delay_cmd_submit": true, 00:24:29.821 "transport_retry_count": 4, 00:24:29.821 "bdev_retry_count": 3, 00:24:29.821 "transport_ack_timeout": 0, 00:24:29.821 "ctrlr_loss_timeout_sec": 0, 00:24:29.821 "reconnect_delay_sec": 0, 00:24:29.821 "fast_io_fail_timeout_sec": 0, 00:24:29.821 "disable_auto_failback": false, 00:24:29.821 "generate_uuids": false, 00:24:29.821 "transport_tos": 0, 00:24:29.821 "nvme_error_stat": false, 00:24:29.821 "rdma_srq_size": 0, 00:24:29.821 "io_path_stat": false, 00:24:29.821 "allow_accel_sequence": false, 00:24:29.821 "rdma_max_cq_size": 0, 00:24:29.821 "rdma_cm_event_timeout_ms": 0, 00:24:29.821 "dhchap_digests": [ 00:24:29.821 "sha256", 00:24:29.821 "sha384", 00:24:29.821 "sha512" 00:24:29.821 ], 00:24:29.821 "dhchap_dhgroups": [ 00:24:29.821 "null", 00:24:29.821 "ffdhe2048", 00:24:29.821 "ffdhe3072", 00:24:29.821 "ffdhe4096", 00:24:29.821 "ffdhe6144", 00:24:29.821 "ffdhe8192" 00:24:29.821 ] 00:24:29.821 } 00:24:29.821 }, 00:24:29.821 { 00:24:29.821 "method": "bdev_nvme_attach_controller", 00:24:29.821 "params": { 00:24:29.821 "name": "nvme0", 00:24:29.821 "trtype": "TCP", 00:24:29.821 "adrfam": 
"IPv4", 00:24:29.821 "traddr": "127.0.0.1", 00:24:29.821 "trsvcid": "4420", 00:24:29.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:29.821 "prchk_reftag": false, 00:24:29.821 "prchk_guard": false, 00:24:29.821 "ctrlr_loss_timeout_sec": 0, 00:24:29.821 "reconnect_delay_sec": 0, 00:24:29.821 "fast_io_fail_timeout_sec": 0, 00:24:29.821 "psk": "key0", 00:24:29.821 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:29.821 "hdgst": false, 00:24:29.821 "ddgst": false, 00:24:29.821 "multipath": "multipath" 00:24:29.821 } 00:24:29.821 }, 00:24:29.821 { 00:24:29.821 "method": "bdev_nvme_set_hotplug", 00:24:29.821 "params": { 00:24:29.821 "period_us": 100000, 00:24:29.821 "enable": false 00:24:29.821 } 00:24:29.821 }, 00:24:29.821 { 00:24:29.821 "method": "bdev_wait_for_examine" 00:24:29.821 } 00:24:29.821 ] 00:24:29.821 }, 00:24:29.821 { 00:24:29.821 "subsystem": "nbd", 00:24:29.821 "config": [] 00:24:29.821 } 00:24:29.821 ] 00:24:29.821 }' 00:24:30.080 [2024-11-04 17:25:30.654097] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 00:24:30.080 [2024-11-04 17:25:30.654192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85450 ] 00:24:30.080 [2024-11-04 17:25:30.786589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.080 [2024-11-04 17:25:30.826066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.339 [2024-11-04 17:25:30.975916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:30.339 [2024-11-04 17:25:31.039618] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:30.906 17:25:31 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:30.906 17:25:31 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:24:30.906 17:25:31 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:24:30.906 17:25:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:30.906 17:25:31 keyring_file -- keyring/file.sh@121 -- # jq length 00:24:31.165 17:25:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:31.165 17:25:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:24:31.165 17:25:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:31.165 17:25:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:31.165 17:25:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:31.165 17:25:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:31.165 17:25:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:31.424 17:25:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:24:31.424 17:25:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:24:31.424 17:25:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:31.424 17:25:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:31.424 17:25:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:31.424 17:25:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:31.424 17:25:32 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:31.682 17:25:32 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:24:31.682 17:25:32 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:24:31.682 17:25:32 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:24:31.682 17:25:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:31.940 17:25:32 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:24:31.940 17:25:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:31.940 17:25:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.0swIDqEuJy /tmp/tmp.6RMs33vlw4 00:24:31.940 17:25:32 keyring_file -- keyring/file.sh@20 -- # killprocess 85450 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85450 ']' 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85450 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@957 -- # uname 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85450 00:24:31.941 killing process with pid 85450 00:24:31.941 Received shutdown signal, test time was about 1.000000 seconds 00:24:31.941 00:24:31.941 Latency(us) 00:24:31.941 [2024-11-04T17:25:32.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.941 [2024-11-04T17:25:32.745Z] =================================================================================================================== 00:24:31.941 [2024-11-04T17:25:32.745Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85450' 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@971 -- # kill 85450 00:24:31.941 17:25:32 keyring_file -- common/autotest_common.sh@976 -- # wait 85450 00:24:32.199 17:25:32 keyring_file -- keyring/file.sh@21 -- # killprocess 85202 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85202 ']' 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85202 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@957 -- # uname 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85202 00:24:32.199 killing process with pid 85202 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85202' 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@971 -- # kill 85202 00:24:32.199 17:25:32 keyring_file -- common/autotest_common.sh@976 -- # wait 85202 00:24:32.790 00:24:32.790 real 0m14.831s 00:24:32.790 user 0m37.469s 00:24:32.790 sys 0m2.844s 00:24:32.790 17:25:33 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:32.790 17:25:33 keyring_file 
-- common/autotest_common.sh@10 -- # set +x 00:24:32.790 ************************************ 00:24:32.790 END TEST keyring_file 00:24:32.790 ************************************ 00:24:32.790 17:25:33 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:24:32.790 17:25:33 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:32.790 17:25:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:32.790 17:25:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:32.790 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:24:32.790 ************************************ 00:24:32.790 START TEST keyring_linux 00:24:32.790 ************************************ 00:24:32.790 17:25:33 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:32.790 Joined session keyring: 636015245 00:24:32.790 * Looking for test storage... 00:24:32.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:32.790 17:25:33 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:32.790 17:25:33 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:24:32.790 17:25:33 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:33.049 17:25:33 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:24:33.049 17:25:33 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@345 -- # : 1 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@368 -- # return 0 00:24:33.050 17:25:33 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.050 17:25:33 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:33.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.050 --rc genhtml_branch_coverage=1 00:24:33.050 --rc genhtml_function_coverage=1 00:24:33.050 --rc genhtml_legend=1 00:24:33.050 --rc geninfo_all_blocks=1 00:24:33.050 --rc geninfo_unexecuted_blocks=1 00:24:33.050 00:24:33.050 ' 00:24:33.050 17:25:33 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:33.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.050 --rc genhtml_branch_coverage=1 00:24:33.050 --rc genhtml_function_coverage=1 00:24:33.050 --rc genhtml_legend=1 00:24:33.050 --rc geninfo_all_blocks=1 00:24:33.050 --rc geninfo_unexecuted_blocks=1 00:24:33.050 00:24:33.050 ' 00:24:33.050 17:25:33 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:33.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.050 --rc genhtml_branch_coverage=1 00:24:33.050 --rc genhtml_function_coverage=1 00:24:33.050 --rc genhtml_legend=1 00:24:33.050 --rc geninfo_all_blocks=1 00:24:33.050 --rc geninfo_unexecuted_blocks=1 00:24:33.050 00:24:33.050 ' 00:24:33.050 17:25:33 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:33.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.050 --rc genhtml_branch_coverage=1 00:24:33.050 --rc genhtml_function_coverage=1 00:24:33.050 --rc genhtml_legend=1 00:24:33.050 --rc geninfo_all_blocks=1 00:24:33.050 --rc geninfo_unexecuted_blocks=1 00:24:33.050 00:24:33.050 ' 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.050 17:25:33 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8c073979-9b92-4972-b56b-796474446288 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=8c073979-9b92-4972-b56b-796474446288 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.050 17:25:33 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.050 17:25:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.050 17:25:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.050 17:25:33 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.050 17:25:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:33.050 17:25:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@51 -- # : 0 
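[Editor's note] The key-count and refcount checks in the keyring_file run above, and the check_keys calls in the keyring_linux run below, all go through the same pattern: a thin wrapper that points scripts/rpc.py at the bdevperf RPC socket and filters the JSON reply with jq. A minimal sketch of that helper, assuming the same /var/tmp/bperf.sock socket path used in this run:

```bash
#!/usr/bin/env bash
# Sketch of the bperf_cmd/get_refcnt helpers used by the keyring tests.
# Socket path and key names mirror this run; adjust for your own setup.
BPERF_SOCK=/var/tmp/bperf.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

bperf_cmd() {
    # Forward any RPC (keyring_get_keys, bdev_nvme_get_controllers, ...) to bdevperf.
    "$RPC" -s "$BPERF_SOCK" "$@"
}

get_refcnt() {
    # Pull one key's entry out of keyring_get_keys and print its refcount.
    bperf_cmd keyring_get_keys | jq -r ".[] | select(.name == \"$1\") | .refcnt"
}

bperf_cmd keyring_get_keys | jq length   # number of registered keys
get_refcnt key0                          # e.g. 2 while nvme0 holds the key
```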
00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:33.050 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:33.050 /tmp/:spdk-test:key0 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:33.050 17:25:33 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:33.050 /tmp/:spdk-test:key1 00:24:33.050 17:25:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85580 00:24:33.050 17:25:33 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:33.051 17:25:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85580 00:24:33.051 17:25:33 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85580 ']' 00:24:33.051 17:25:33 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.051 17:25:33 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.051 17:25:33 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.051 17:25:33 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.051 17:25:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:33.051 [2024-11-04 17:25:33.808719] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
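[Editor's note] The prep_key calls above turn each raw hex key into the NVMe TLS interchange form (NVMeTLSkey-1:00:&lt;base64&gt;:) before writing it to /tmp/:spdk-test:keyN with mode 0600. A stand-alone sketch of that formatting step is below; the trailing-CRC32 layout is an assumption based on the NVMe/TCP PSK interchange format, not a copy of the test script.

```bash
# Sketch of prep_key/format_interchange_psk: hex key string -> NVMeTLSkey-1:NN:...: file.
# The base64(key || little-endian CRC32) layout is an assumption (NVMe/TCP PSK interchange format).
prep_key_sketch() {
    local key=$1 digest=$2 path=$3 psk
    psk=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (d, base64.b64encode(k+crc).decode()), end="")' "$key" "$digest")
    echo -n "$psk" > "$path"
    chmod 0600 "$path"
    echo "$path"
}

prep_key_sketch 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
```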
00:24:33.051 [2024-11-04 17:25:33.809022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85580 ] 00:24:33.308 [2024-11-04 17:25:33.954948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.308 [2024-11-04 17:25:34.005768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.308 [2024-11-04 17:25:34.092190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:24:34.245 17:25:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:34.245 [2024-11-04 17:25:34.744484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.245 null0 00:24:34.245 [2024-11-04 17:25:34.776461] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:34.245 [2024-11-04 17:25:34.776655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.245 17:25:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:34.245 231731295 00:24:34.245 17:25:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:34.245 875405738 00:24:34.245 17:25:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85598 00:24:34.245 17:25:34 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:34.245 17:25:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85598 /var/tmp/bperf.sock 00:24:34.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85598 ']' 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:34.245 17:25:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:34.245 [2024-11-04 17:25:34.858460] Starting SPDK v25.01-pre git sha1 16e58adb1 / DPDK 24.03.0 initialization... 
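[Editor's note] The two keyctl add calls above are what distinguish keyring_linux from keyring_file: the interchange PSKs live in the kernel session keyring rather than in files, and the test later resolves them by serial number and unlinks them during cleanup. A condensed sketch of that lifecycle, using the key names and payloads from this run:

```bash
# Register the interchange PSKs as user keys in the session keyring (@s);
# keyctl add prints the serial number of the new key.
sn0=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
sn1=$(keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s)

# Resolve a key back to its serial and inspect the payload.
keyctl search @s user :spdk-test:key0   # prints the serial, e.g. 231731295
keyctl print "$sn0"                     # prints the NVMeTLSkey-1:00:...: payload

# Cleanup: drop both keys from the session keyring.
keyctl unlink "$sn0"
keyctl unlink "$sn1"
```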
00:24:34.245 [2024-11-04 17:25:34.859296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85598 ] 00:24:34.245 [2024-11-04 17:25:34.996298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.245 [2024-11-04 17:25:35.043644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.504 17:25:35 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:34.504 17:25:35 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:24:34.504 17:25:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:34.504 17:25:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:34.504 17:25:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:34.504 17:25:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:34.763 [2024-11-04 17:25:35.557122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:35.022 17:25:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:35.022 17:25:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:35.022 [2024-11-04 17:25:35.807903] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:35.283 nvme0n1 00:24:35.283 17:25:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:35.283 17:25:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:35.283 17:25:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:35.283 17:25:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:35.283 17:25:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:35.283 17:25:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:35.542 17:25:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:35.542 17:25:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:35.542 17:25:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:35.542 17:25:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:35.542 17:25:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:35.542 17:25:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:35.542 17:25:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:35.801 17:25:36 keyring_linux -- keyring/linux.sh@25 -- # sn=231731295 00:24:35.801 17:25:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:35.801 17:25:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:24:35.801 17:25:36 keyring_linux -- keyring/linux.sh@26 -- # [[ 231731295 == \2\3\1\7\3\1\2\9\5 ]] 00:24:35.801 17:25:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 231731295 00:24:35.801 17:25:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:35.801 17:25:36 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:35.801 Running I/O for 1 seconds... 00:24:36.738 12617.00 IOPS, 49.29 MiB/s 00:24:36.738 Latency(us) 00:24:36.738 [2024-11-04T17:25:37.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.738 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:36.738 nvme0n1 : 1.01 12620.72 49.30 0.00 0.00 10086.29 4557.73 15371.17 00:24:36.738 [2024-11-04T17:25:37.542Z] =================================================================================================================== 00:24:36.738 [2024-11-04T17:25:37.542Z] Total : 12620.72 49.30 0.00 0.00 10086.29 4557.73 15371.17 00:24:36.738 { 00:24:36.738 "results": [ 00:24:36.738 { 00:24:36.738 "job": "nvme0n1", 00:24:36.738 "core_mask": "0x2", 00:24:36.738 "workload": "randread", 00:24:36.738 "status": "finished", 00:24:36.738 "queue_depth": 128, 00:24:36.738 "io_size": 4096, 00:24:36.738 "runtime": 1.009847, 00:24:36.738 "iops": 12620.723733397237, 00:24:36.738 "mibps": 49.29970208358296, 00:24:36.738 "io_failed": 0, 00:24:36.738 "io_timeout": 0, 00:24:36.738 "avg_latency_us": 10086.29284867506, 00:24:36.738 "min_latency_us": 4557.730909090909, 00:24:36.738 "max_latency_us": 15371.17090909091 00:24:36.739 } 00:24:36.739 ], 00:24:36.739 "core_count": 1 00:24:36.739 } 00:24:36.739 17:25:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:36.739 17:25:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:36.997 17:25:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:36.997 17:25:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:36.997 17:25:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:36.997 17:25:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:36.997 17:25:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:36.997 17:25:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:37.256 17:25:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:37.256 17:25:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:37.256 17:25:38 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:37.256 17:25:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:37.256 17:25:38 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:24:37.256 17:25:38 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:37.257 
17:25:38 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:37.257 17:25:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:37.257 17:25:38 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:37.257 17:25:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:37.257 17:25:38 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:37.257 17:25:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:37.516 [2024-11-04 17:25:38.311606] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:37.516 [2024-11-04 17:25:38.312317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e265d0 (107): Transport endpoint is not connected 00:24:37.516 [2024-11-04 17:25:38.313307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e265d0 (9): Bad file descriptor 00:24:37.516 [2024-11-04 17:25:38.314302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:37.516 [2024-11-04 17:25:38.314374] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:37.516 [2024-11-04 17:25:38.314386] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:37.516 [2024-11-04 17:25:38.314397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
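[Editor's note] The errors above are the test's expected negative path: the same bdev_nvme_attach_controller RPC that succeeded earlier with :spdk-test:key0 is retried with :spdk-test:key1, presumably a PSK the target side was not configured to accept, so the connection never comes up and the RPC returns an I/O error (its request/response dump follows below). A sketch of the two calls against the bdevperf socket, with flags taken from this run:

```bash
# Helper mirroring the bperf_cmd wrapper used by the test.
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

# Succeeds: the target accepts the PSK registered behind :spdk-test:key0.
bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Expected to fail: key1's PSK does not match what the target knows,
# so the attach comes back with "Input/output error" as logged here.
bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
```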
00:24:37.516 request: 00:24:37.516 { 00:24:37.516 "name": "nvme0", 00:24:37.516 "trtype": "tcp", 00:24:37.516 "traddr": "127.0.0.1", 00:24:37.516 "adrfam": "ipv4", 00:24:37.516 "trsvcid": "4420", 00:24:37.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:37.516 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:37.516 "prchk_reftag": false, 00:24:37.516 "prchk_guard": false, 00:24:37.516 "hdgst": false, 00:24:37.516 "ddgst": false, 00:24:37.516 "psk": ":spdk-test:key1", 00:24:37.516 "allow_unrecognized_csi": false, 00:24:37.516 "method": "bdev_nvme_attach_controller", 00:24:37.516 "req_id": 1 00:24:37.516 } 00:24:37.516 Got JSON-RPC error response 00:24:37.516 response: 00:24:37.516 { 00:24:37.516 "code": -5, 00:24:37.516 "message": "Input/output error" 00:24:37.516 } 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@33 -- # sn=231731295 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 231731295 00:24:37.775 1 links removed 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@33 -- # sn=875405738 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 875405738 00:24:37.775 1 links removed 00:24:37.775 17:25:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85598 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85598 ']' 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85598 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:37.775 17:25:38 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85598 00:24:37.775 killing process with pid 85598 00:24:37.775 Received shutdown signal, test time was about 1.000000 seconds 00:24:37.775 00:24:37.775 Latency(us) 00:24:37.775 [2024-11-04T17:25:38.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.775 [2024-11-04T17:25:38.579Z] =================================================================================================================== 00:24:37.775 [2024-11-04T17:25:38.579Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.776 17:25:38 keyring_linux -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85598' 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@971 -- # kill 85598 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@976 -- # wait 85598 00:24:37.776 17:25:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85580 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85580 ']' 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85580 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:37.776 17:25:38 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85580 00:24:38.035 killing process with pid 85580 00:24:38.035 17:25:38 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:38.035 17:25:38 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:38.035 17:25:38 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85580' 00:24:38.035 17:25:38 keyring_linux -- common/autotest_common.sh@971 -- # kill 85580 00:24:38.035 17:25:38 keyring_linux -- common/autotest_common.sh@976 -- # wait 85580 00:24:38.294 00:24:38.294 real 0m5.619s 00:24:38.294 user 0m10.246s 00:24:38.294 sys 0m1.677s 00:24:38.294 ************************************ 00:24:38.294 END TEST keyring_linux 00:24:38.294 ************************************ 00:24:38.294 17:25:39 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:38.294 17:25:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:38.553 17:25:39 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:38.553 17:25:39 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:24:38.553 17:25:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:38.553 17:25:39 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:38.553 17:25:39 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:24:38.553 17:25:39 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:24:38.553 17:25:39 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:24:38.553 17:25:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.553 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:24:38.553 17:25:39 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:24:38.553 17:25:39 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:24:38.553 17:25:39 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:24:38.553 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:24:40.456 INFO: APP EXITING 00:24:40.456 INFO: killing all VMs 
00:24:40.456 INFO: killing vhost app 00:24:40.456 INFO: EXIT DONE 00:24:41.024 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:41.024 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:41.024 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:41.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:41.960 Cleaning 00:24:41.961 Removing: /var/run/dpdk/spdk0/config 00:24:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:41.961 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:41.961 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:41.961 Removing: /var/run/dpdk/spdk1/config 00:24:41.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:41.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:41.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:41.961 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:41.961 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:41.961 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:41.961 Removing: /var/run/dpdk/spdk2/config 00:24:41.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:41.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:41.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:41.961 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:41.961 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:41.961 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:41.961 Removing: /var/run/dpdk/spdk3/config 00:24:41.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:41.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:41.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:41.961 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:41.961 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:41.961 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:41.961 Removing: /var/run/dpdk/spdk4/config 00:24:41.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:41.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:41.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:41.961 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:41.961 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:41.961 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:41.961 Removing: /dev/shm/nvmf_trace.0 00:24:41.961 Removing: /dev/shm/spdk_tgt_trace.pid56731 00:24:41.961 Removing: /var/run/dpdk/spdk0 00:24:41.961 Removing: /var/run/dpdk/spdk1 00:24:41.961 Removing: /var/run/dpdk/spdk2 00:24:41.961 Removing: /var/run/dpdk/spdk3 00:24:41.961 Removing: /var/run/dpdk/spdk4 00:24:41.961 Removing: /var/run/dpdk/spdk_pid56577 00:24:41.961 Removing: /var/run/dpdk/spdk_pid56731 00:24:41.961 Removing: /var/run/dpdk/spdk_pid56937 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57018 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57051 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57161 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57179 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57313 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57514 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57662 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57735 00:24:41.961 
Removing: /var/run/dpdk/spdk_pid57817 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57916 00:24:41.961 Removing: /var/run/dpdk/spdk_pid57988 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58025 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58062 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58126 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58207 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58653 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58705 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58756 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58765 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58836 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58853 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58920 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58923 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58974 00:24:41.961 Removing: /var/run/dpdk/spdk_pid58985 00:24:41.961 Removing: /var/run/dpdk/spdk_pid59025 00:24:41.961 Removing: /var/run/dpdk/spdk_pid59035 00:24:41.961 Removing: /var/run/dpdk/spdk_pid59171 00:24:41.961 Removing: /var/run/dpdk/spdk_pid59207 00:24:41.961 Removing: /var/run/dpdk/spdk_pid59289 00:24:41.961 Removing: /var/run/dpdk/spdk_pid59616 00:24:41.961 Removing: /var/run/dpdk/spdk_pid59633 00:24:41.961 Removing: /var/run/dpdk/spdk_pid59664 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59678 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59699 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59718 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59731 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59747 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59766 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59785 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59800 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59825 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59833 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59854 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59873 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59892 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59902 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59921 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59940 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59950 00:24:42.221 Removing: /var/run/dpdk/spdk_pid59986 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60005 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60029 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60103 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60137 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60141 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60175 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60179 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60192 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60236 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60249 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60282 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60287 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60302 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60306 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60321 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60327 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60342 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60346 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60380 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60407 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60416 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60445 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60454 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60462 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60502 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60514 00:24:42.221 Removing: 
/var/run/dpdk/spdk_pid60540 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60553 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60561 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60568 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60576 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60583 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60591 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60598 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60680 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60728 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60840 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60874 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60919 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60939 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60955 00:24:42.221 Removing: /var/run/dpdk/spdk_pid60970 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61007 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61028 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61109 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61125 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61169 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61254 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61310 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61339 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61439 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61481 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61519 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61746 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61843 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61872 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61901 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61935 00:24:42.221 Removing: /var/run/dpdk/spdk_pid61973 00:24:42.480 Removing: /var/run/dpdk/spdk_pid62007 00:24:42.480 Removing: /var/run/dpdk/spdk_pid62039 00:24:42.480 Removing: /var/run/dpdk/spdk_pid62427 00:24:42.480 Removing: /var/run/dpdk/spdk_pid62462 00:24:42.480 Removing: /var/run/dpdk/spdk_pid62798 00:24:42.480 Removing: /var/run/dpdk/spdk_pid63269 00:24:42.480 Removing: /var/run/dpdk/spdk_pid63531 00:24:42.480 Removing: /var/run/dpdk/spdk_pid64385 00:24:42.480 Removing: /var/run/dpdk/spdk_pid65303 00:24:42.480 Removing: /var/run/dpdk/spdk_pid65420 00:24:42.480 Removing: /var/run/dpdk/spdk_pid65493 00:24:42.480 Removing: /var/run/dpdk/spdk_pid66894 00:24:42.480 Removing: /var/run/dpdk/spdk_pid67200 00:24:42.480 Removing: /var/run/dpdk/spdk_pid70946 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71298 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71407 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71542 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71563 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71585 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71606 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71704 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71841 00:24:42.480 Removing: /var/run/dpdk/spdk_pid71990 00:24:42.480 Removing: /var/run/dpdk/spdk_pid72064 00:24:42.480 Removing: /var/run/dpdk/spdk_pid72260 00:24:42.480 Removing: /var/run/dpdk/spdk_pid72323 00:24:42.480 Removing: /var/run/dpdk/spdk_pid72408 00:24:42.480 Removing: /var/run/dpdk/spdk_pid72771 00:24:42.480 Removing: /var/run/dpdk/spdk_pid73192 00:24:42.480 Removing: /var/run/dpdk/spdk_pid73193 00:24:42.480 Removing: /var/run/dpdk/spdk_pid73194 00:24:42.480 Removing: /var/run/dpdk/spdk_pid73454 00:24:42.480 Removing: /var/run/dpdk/spdk_pid73719 00:24:42.480 Removing: /var/run/dpdk/spdk_pid74114 00:24:42.480 Removing: /var/run/dpdk/spdk_pid74116 00:24:42.480 Removing: /var/run/dpdk/spdk_pid74442 00:24:42.480 Removing: /var/run/dpdk/spdk_pid74456 
00:24:42.480 Removing: /var/run/dpdk/spdk_pid74474 00:24:42.480 Removing: /var/run/dpdk/spdk_pid74506 00:24:42.480 Removing: /var/run/dpdk/spdk_pid74511 00:24:42.480 Removing: /var/run/dpdk/spdk_pid74865 00:24:42.480 Removing: /var/run/dpdk/spdk_pid74914 00:24:42.480 Removing: /var/run/dpdk/spdk_pid75244 00:24:42.480 Removing: /var/run/dpdk/spdk_pid75442 00:24:42.480 Removing: /var/run/dpdk/spdk_pid75876 00:24:42.480 Removing: /var/run/dpdk/spdk_pid76412 00:24:42.480 Removing: /var/run/dpdk/spdk_pid77301 00:24:42.480 Removing: /var/run/dpdk/spdk_pid77932 00:24:42.480 Removing: /var/run/dpdk/spdk_pid77935 00:24:42.480 Removing: /var/run/dpdk/spdk_pid79975 00:24:42.480 Removing: /var/run/dpdk/spdk_pid80029 00:24:42.480 Removing: /var/run/dpdk/spdk_pid80078 00:24:42.480 Removing: /var/run/dpdk/spdk_pid80132 00:24:42.480 Removing: /var/run/dpdk/spdk_pid80244 00:24:42.480 Removing: /var/run/dpdk/spdk_pid80297 00:24:42.480 Removing: /var/run/dpdk/spdk_pid80345 00:24:42.480 Removing: /var/run/dpdk/spdk_pid80398 00:24:42.480 Removing: /var/run/dpdk/spdk_pid80766 00:24:42.480 Removing: /var/run/dpdk/spdk_pid81975 00:24:42.480 Removing: /var/run/dpdk/spdk_pid82104 00:24:42.480 Removing: /var/run/dpdk/spdk_pid82352 00:24:42.480 Removing: /var/run/dpdk/spdk_pid82948 00:24:42.480 Removing: /var/run/dpdk/spdk_pid83108 00:24:42.480 Removing: /var/run/dpdk/spdk_pid83265 00:24:42.480 Removing: /var/run/dpdk/spdk_pid83362 00:24:42.480 Removing: /var/run/dpdk/spdk_pid83531 00:24:42.480 Removing: /var/run/dpdk/spdk_pid83640 00:24:42.480 Removing: /var/run/dpdk/spdk_pid84345 00:24:42.480 Removing: /var/run/dpdk/spdk_pid84375 00:24:42.480 Removing: /var/run/dpdk/spdk_pid84410 00:24:42.480 Removing: /var/run/dpdk/spdk_pid84665 00:24:42.480 Removing: /var/run/dpdk/spdk_pid84700 00:24:42.480 Removing: /var/run/dpdk/spdk_pid84731 00:24:42.480 Removing: /var/run/dpdk/spdk_pid85202 00:24:42.480 Removing: /var/run/dpdk/spdk_pid85212 00:24:42.480 Removing: /var/run/dpdk/spdk_pid85450 00:24:42.480 Removing: /var/run/dpdk/spdk_pid85580 00:24:42.480 Removing: /var/run/dpdk/spdk_pid85598 00:24:42.480 Clean 00:24:42.779 17:25:43 -- common/autotest_common.sh@1451 -- # return 0 00:24:42.779 17:25:43 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:24:42.779 17:25:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.779 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.779 17:25:43 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:24:42.779 17:25:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.779 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:24:42.779 17:25:43 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:42.779 17:25:43 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:42.779 17:25:43 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:42.779 17:25:43 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:24:42.779 17:25:43 -- spdk/autotest.sh@394 -- # hostname 00:24:42.779 17:25:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:43.037 geninfo: WARNING: invalid characters removed from testname! 
00:25:04.961 17:26:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:07.492 17:26:08 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:10.025 17:26:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:12.557 17:26:12 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:15.089 17:26:15 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:16.992 17:26:17 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:19.521 17:26:19 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:19.521 17:26:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:19.521 17:26:19 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:25:19.521 17:26:19 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:19.521 17:26:19 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:19.521 17:26:19 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:19.521 + [[ -n 5215 ]] 00:25:19.521 + sudo kill 5215 00:25:19.529 [Pipeline] } 00:25:19.544 [Pipeline] // timeout 00:25:19.550 [Pipeline] } 00:25:19.564 [Pipeline] // stage 00:25:19.569 [Pipeline] } 00:25:19.583 [Pipeline] // catchError 00:25:19.592 [Pipeline] stage 00:25:19.595 [Pipeline] { (Stop VM) 00:25:19.607 [Pipeline] sh 00:25:19.887 + vagrant halt 00:25:22.464 ==> default: Halting domain... 
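[Editor's note] The lcov invocations above implement one pipeline: capture per-run coverage, merge it with the baseline capture, then strip external, DPDK, and example/app paths. Condensed into a sketch (paths and flags mirror this run; the extra genhtml/geninfo rc options from autotest.sh are omitted for brevity):

```bash
OUT=/home/vagrant/spdk_repo/output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# Capture coverage for this run (same flags as the autotest hostname step).
lcov $LCOV_OPTS -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
    -t "$(hostname)" -o "$OUT/cov_test.info"

# Merge with the baseline capture, then filter out code we do not own.
lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
```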
00:25:29.038 [Pipeline] sh 00:25:29.313 + vagrant destroy -f 00:25:31.848 ==> default: Removing domain... 00:25:32.119 [Pipeline] sh 00:25:32.400 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:25:32.409 [Pipeline] } 00:25:32.424 [Pipeline] // stage 00:25:32.430 [Pipeline] } 00:25:32.444 [Pipeline] // dir 00:25:32.448 [Pipeline] } 00:25:32.461 [Pipeline] // wrap 00:25:32.466 [Pipeline] } 00:25:32.478 [Pipeline] // catchError 00:25:32.488 [Pipeline] stage 00:25:32.490 [Pipeline] { (Epilogue) 00:25:32.501 [Pipeline] sh 00:25:32.782 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:38.065 [Pipeline] catchError 00:25:38.068 [Pipeline] { 00:25:38.080 [Pipeline] sh 00:25:38.363 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:38.363 Artifacts sizes are good 00:25:38.374 [Pipeline] } 00:25:38.388 [Pipeline] // catchError 00:25:38.402 [Pipeline] archiveArtifacts 00:25:38.410 Archiving artifacts 00:25:38.534 [Pipeline] cleanWs 00:25:38.546 [WS-CLEANUP] Deleting project workspace... 00:25:38.546 [WS-CLEANUP] Deferred wipeout is used... 00:25:38.552 [WS-CLEANUP] done 00:25:38.554 [Pipeline] } 00:25:38.572 [Pipeline] // stage 00:25:38.577 [Pipeline] } 00:25:38.591 [Pipeline] // node 00:25:38.596 [Pipeline] End of Pipeline 00:25:38.635 Finished: SUCCESS